
The Need for Quantifying and Qualifying Strategy

In the last few posts, I talked about how designers and strategists can facilitate the discussions that lead to making good decisions, the importance of creating choices, and how priorities make things happen.

In this post, I’ll talk about the need for a set of tools for quantifying and qualifying strategy that both empowers intuition and creativity and helps teams find objective ways to value design solutions to justify the experience investments that bring us ever closer to our vision and goals.

Table Of Contents
  1. TL;DR;
  2. Quantifying and Qualifying Design
  3. Quantifying and Qualifying Value
  4. The Right Time for Quantifying and Qualifying
  5. Recommended Reading

TL;DR;

  • Design may enhance performance, but unless there are metrics to gauge that benefit, the difference it makes depends on conjecture and faith.
  • Measurement allows comparison of expected outcomes with actual outcomes and enables you to adjust strategic choices accordingly.
  • There are challenges to consider while quantifying and qualifying strategy:
    • Different organizations have specific strategic choices about winning that uniquely position them in their corresponding industries.
    • Different organizations are at different levels of design maturity.
    • Some metrics are collected too late: some strategic decisions made early on could be quite expensive to reverse or pivot at that point.
    • Beware of how you measure: quantitative metrics are good at explaining the ‘What’ and ‘How many’ of a given hypothesis; the ‘Why’ is usually better captured through qualitative research methods.
    • The world is different now: some of the signals and indicators that worked for measuring success may not work for new products or services you are trying to create.
  • We need objective ways to value design solutions to justify the experience investments, and to look at the different points in strategic planning and execution to identify the discussions that strategists should facilitate (investment decisions, pivots, and risk mitigation) while tracking and tracing the implementation of strategy to ensure we are bringing value to our customers and our business.
  • Your initial product strategy may contain plenty of assumptions and risks, and you may well discover that the strategy is wrong and does not work. If that is the case, then you have two choices:
    • Stop and let go of your vision, or
    • stick with the vision and change the strategy, which is also called pivot.
  • When product managers, designers and strategists are crafting their strategy or working in the discovery phase, the kind of user and customer insights they are looking for is really hard to acquire through quantitative metrics: most insights (especially desirability and satisfaction) come from preference data.

Quantifying and Qualifying Design

As businesses increasingly recognise the power of design to provide significant benefits, business executives are asking for metrics to evaluate the performance of design. What is needed is a framework for quantifying and qualifying design and strategy: a specific set of criteria and methods to be used as a structure to define and measure the values of design (Lockwood, T., “Design Value: A Framework for Measurement” in Building Design Strategy, 2008).

“Design may enhance performance, but unless there are metrics to gauge that benefit, the difference it makes depends on conjecture and faith.”

Lockwood, T., “Design Value: A Framework for Measurement” in Building Design Strategy: Using Design to Achieve Key Business Objectives, (2008)

The following identifies ten categories of design measurement, all of which are relevant to business criteria and can be used as a framework for measuring the value of design (Lockwood, T., “Design Value: A Framework for Measurement” in Building Design Strategy: Using Design to Achieve Key Business Objectives, 2008):

  1. Purchase Influence / Emotion
  2. Enable Strategy / enter new markets
  3. Build brand image and corporate reputation
  4. Improve time to market and development process
  5. Design return on investment (ROI) / cost savings
  6. Enable product and service innovation
  7. Increase customer satisfaction / develop communities of customers
  8. Design patents and trademarks / create intellectual property
  9. Improve usability
  10. Improve sustainability

On a personal note: I’ve been in too many discussions in which stakeholders and designers get caught up in what the return on investment (ROI) of design is. If you keep having to justify the value that design brings to the organization, or you find it difficult to have meaningful conversations with stakeholders on how to connect their business to any of the ten categories above, you may want to consider whether this is the right organization for you to work for.

Learn more about the skills required for design strategists to influence the decisions that drive design vision forward in Strategy and Stakeholder Management (Photo by Rebrand Cities on Pexels.com)

Value of Design and Metrics

Increasingly, usability practitioners and user researchers are expected to quantify the benefits of their efforts. If they don’t, someone else will — unfortunately that someone else might not use the right metrics or methods (Sauro, J., & Lewis, J. R., Quantifying the user experience. 2016).

When deciding on the most appropriate metrics, two main aspects of the user experience to consider are performance and satisfaction (Tullis, T., & Albert, W., Measuring the user experience. 2013).

Performance Metrics

Performance is all about what the user actually does in interaction with the product. It includes measuring the degree to which users can successfully accomplish a task or a set of tasks during a usability study. There are five basic types of performance metrics (Tullis, T., & Albert, W., Measuring the user experience. 2013):

  1. Task success is perhaps the most widely used performance metric. It measures how effectively users are able to complete a given set of tasks.
  2. Time-on-task is a common performance metric that measures how much time is required to complete a task.
  3. Errors reflect the mistakes made during a task. Errors can be useful in pointing out particularly confusing or misleading parts of an interface.
  4. Efficiency can be assessed by examining the amount of effort a user expends to complete a task, measured by the number of steps or actions required to complete a task or by the ratio of task success to the average time per task.
  5. Learnability is a way to measure how performance changes over time.
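
To make these concrete, here is a minimal sketch (in Python, using hypothetical usability-test records) of how the first four metrics might be computed from raw task-attempt data; the field names and the efficiency ratio are illustrative assumptions, not a prescribed format.

```python
from statistics import mean

# Hypothetical usability-test records: one entry per participant attempt at a task.
attempts = [
    {"participant": "P1", "success": True,  "seconds": 74,  "errors": 1, "steps": 12},
    {"participant": "P2", "success": False, "seconds": 180, "errors": 4, "steps": 21},
    {"participant": "P3", "success": True,  "seconds": 95,  "errors": 0, "steps": 13},
]

success_rate = mean(1 if a["success"] else 0 for a in attempts)  # task success
avg_time = mean(a["seconds"] for a in attempts)                  # time-on-task
avg_errors = mean(a["errors"] for a in attempts)                 # errors
efficiency = success_rate / (avg_time / 60)                      # one possible ratio: successes per minute

print(f"Task success: {success_rate:.0%}, time-on-task: {avg_time:.0f}s, "
      f"errors per task: {avg_errors:.1f}, efficiency: {efficiency:.2f} successes/min")
```
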
Satisfaction Metrics

Satisfaction is all about what users say or think about their interaction with the product, traditionally captured during post-study surveys. Users might report that it was easy to use, that it was confusing, or that it exceeded their expectations. Users might have opinions about the product being visually appealing or untrustworthy (Tullis, T., & Albert, W., Measuring the user experience, 2013).

Do performance and satisfaction always correlate?

Perhaps surprisingly, performance and satisfaction don’t always go hand-in-hand.

We’ve seen many instances of a user struggling to perform key tasks with an application and then giving it a glowing satisfaction rating. Conversely, we’ve seen users give poor satisfaction ratings to an application that worked perfectly.

Tullis, T., & Albert, W., “Planning a Usability Study” in Measuring the user experience (2013)

So it is important that you look at both performance and satisfaction metrics to get an accurate overall picture of the user experience (Tullis, T., & Albert, W., Measuring the user experience. 2013).

I’ll cover satisfaction metrics (also known as preference data) extensively later in this post.

Pirate Metrics (a.k.a. AARRR!)

Pirate Metrics—a term coined by venture capitalist Dave McClure—gets its name from the acronym for five distinct elements of building a successful business. McClure categorizes the metrics a startup needs to watch into acquisition, activation, retention, revenue, and referral—AARRR (Croll, A., & Yoskovitz, B. Lean Analytics. 2013).

Pirate Metrics or AARRR!: acquisition, activation, retention, revenue, and referral
Pirate Metrics or AARRR! in Lean Analytics (Croll, A., & Yoskovitz, B., 2013)

McClure recommends tracking two or three key metrics for each of the five elements of his framework. That is a good idea because your conversion funnel isn’t really just one overall metric; you can track more detailed metrics, making a distinction between the macro-metrics and the micro-metrics that relate to them (Olsen, D. The lean product playbook, 2015).
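
As a minimal sketch of what tracking the macro-level funnel could look like, the snippet below computes stage-to-stage conversion rates for the five AARRR stages; the counts and the stage definitions are hypothetical.

```python
# Hypothetical weekly counts for each AARRR stage.
funnel = {
    "acquisition": 12000,  # e.g. unique visitors
    "activation": 3600,    # e.g. completed onboarding
    "retention": 1100,     # e.g. returned in week 2
    "revenue": 240,        # e.g. paying customers
    "referral": 60,        # e.g. invites that converted
}

stages = list(funnel)
for prev, curr in zip(stages, stages[1:]):
    rate = funnel[curr] / funnel[prev]
    print(f"{prev} -> {curr}: {rate:.1%}")
```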

HEART Metrics

What the research team from Google noted was that, while small-scale frameworks were commonplace, there was no framework in place for measuring the experience on a large scale via automated means. Thus the HEART framework is specifically targeted at that kind of measurement. However, the principles are equally useful at a small scale, though the methodologies used to derive measurements at a smaller scale are likely to be substantially different (Rodden, K., Hutchinson, H., & Fu, X., Measuring the user experience on a large scale: User-centered metrics for web applications, 2010).

There are five metrics used in the HEART framework:

  • Happiness
  • Engagement
  • Adoption
  • Retention
  • Task Success
The Google HEART framework is incredibly useful to measure the quality of your user experience. The metrics are Happiness, Engagement, Adoption, Retention, Task Success.
“Google HEART metrics” in What Makes a Good UX/UI Design? 

HEART can be seen as a manifestation of the Technology Acceptance Model (TAM)—after all, both include Adoption in their names. The TAM is itself a manifestation of the Theory of Reasoned Action (TRA), a model that predicts behavior from attitudes. The TAM suggests that people will adopt and continue to use technology (the EAR of the HEART) based on the perception of how easy it is to use (H), how easy it actually is to use (T), and whether it’s perceived as useful (H). The SUS, SEQ, and the ease components of the UMUX-Lite and SUPR-Q are all great examples of measuring perceived ease, and bring the Happiness to the HEART model (Sauro, J., Should you love the HEART framework?, 2019):

Jeff Sauro mapped together what he sees as the overlaps among the TRA, TAM, and other commonly collected UX metrics.
Linking the HEART framework with other existing models (TAM, TRA, and metrics that link them) in Should you love the HEART framework? (Sauro, J., 2019)

Design Metrics and Strategy

It’s an old saying that what gets measured gets done. There is more than a little truth to this. If aspirations are to be achieved, capabilities developed, and management systems created, progress needs to be measured (“Manage What Matters” in Playing to Win: How Strategy Really Works, Lafley, A.G., Martin, R. L., 2013).

Measurement allows comparison of expected outcomes with actual outcomes and enables you to adjust strategic choices accordingly.

“Manage What Matters” in Playing to Win: How Strategy Really Works (Lafley, A.G., Martin, R. L., 2013)

I will refrain from proposing a single metric for quantifying and qualifying design for a few reasons:

  • Different organizations have specific strategic choices about winning that uniquely position them in their corresponding industries: these metrics should take into consideration both the goals of users and what the business is trying to learn from the study, and usability studies should be designed accordingly.
  • Different organizations are at different levels of design maturity: if you’ve never done any kind of usability study, it’s hard not only to build the capability to run such studies, but also to figure out how to feed the information back into product decisions.
  • Some of these metrics are discovered too late: since some of these metrics are collected either during usability studies or after the product or service is released, it means that — by the time you collect them — a lot of product decisions have already been made. Some of these decisions could be quite expensive to reverse or pivot at that point, so it might be too late for quantifying and qualifying the success of strategy.
  • Beware of how you measure: quantitative metrics are good at explaining the ‘What’ and ‘How many’ of a given hypothesis; the ‘Why’ is usually better captured through qualitative research methods.
  • The world is different now: some of the signals and indicators that worked for measuring success may not work for new products or services you are trying to create.

Never assume that the metrics and standards used to evaluate the existing business have relevance for the innovation initiative.

Govindarajan, V., & Trimble, C., The other side of innovation: Solving the execution challenge (2010)

Quantifying and qualifying the value and performance of design does not need to be complex or foreboding. There is a case for intuition, a case for qualitative user research, a case for quantitative research, and a case for synthesis. And there is even room for imponderables, because some things are simply beyond definition or measurement (Lockwood, T., “Design Value: A Framework for Measurement” in Building Design Strategy, 2008).

Quantitative research can tell you how many customers are doing (or not doing) something. But it won’t tell you why the customers are doing it (or not doing it).

Olsen, D. The lean product playbook, 2015

So, what would a set of tools look like that both empowers intuition and creativity and helps us find objective ways to value design solutions?

Quantifying and Qualifying Value

What I propose — instead — is that we need objective ways to value design solutions to justify the experience investments, and to look at the different points in strategic planning and execution to identify the discussions that strategists should facilitate while tracking and tracing the implementation of strategy to ensure we are bringing value to customers and the business.

Design is the activity of turning vague ideas, market insights, and evidence into concrete value propositions and solid business models. Good design involves the use of strong business model patterns to maximize returns and compete beyond product, price and technology.

Bland, D. J., & Osterwalder, A., Testing business ideas, (2020)

From that perspective, we need to find ways to:

  • Explore (and preferably test) ideas early
  • Facilitate investment discussions by objectively describing business and user value and establishing priorities
  • Assess the risk of pursuing ideas, while capturing signals that indicate if/when to pivot if an idea “doesn’t work”
  • Capture and track progress of strategy implementation
A holistic quantifying and qualifying set of tools and frameworks should help teams with: Validating / Testing Ideas (finding objective ways to explore, and preferably test, ideas early), Facilitating Investment Discussions (business/user value, priorities, effort, etc.), Pivot & Risk Mitigation (assessing the risk, capturing signals, knowing when to pivot), and Visibility and Traceability (capturing and tracking progress)
Instead of a single metric to measure ROI, let’s look at the different discussions that need to be facilitated while quantifying and qualifying strategy, namely: Pivot and Risk Mitigation, Facilitating Investment Discussions, and Validating / Testing Business Ideas

Validating and Testing Business Ideas

“What do people need?” is a critical question to ask when you build products. Wasting your life’s savings and your investors’ money, risking your reputation, making false promises to employees and potential partners, and trashing months of work you can never get back is a shame. It’s also a shame to find out you were completely delusional when you thought that everyone needed the product you were working on (Sharon, T., Validating Product Ideas, 2016).

Don’t make the mistake of executing business ideas without evidence; test your ideas thoroughly, regardless of how great they may seem in theory.

Bland, D. J., & Osterwalder, A., Testing Business Ideas: A Field Guide for Rapid Experimentation (2019)

To test a big business idea, you break it down into smaller chunks of testable hypotheses. These hypotheses cover three types of risk (Bland, D. J., & Osterwalder, Testing Business Ideas: A Field Guide for Rapid Experimentation, 2019):

  • First, that customers aren’t interested in your idea (desirability).
  • Second, that you can’t build and deliver your idea (feasibility).
  • Third, that you can’t earn enough money from your idea (viability).

Creating Hypotheses

Many companies try to deal with complexity with analytical firepower and sophisticated mathematics. That is unfortunate, since the most essential elements of creating a hypothesis can typically be communicated through simple pencil-and-paper sketches (Govindarajan, V., & Trimble, C., The other side of innovation: Solving the execution challenge, 2010.)

The key to dealing with complexity is to focus on having good conversations about assumptions.

Break Down the Hypothesis in The other side of innovation: Solving the execution challenge, Govindarajan, V., & Trimble, C., (2010)

As you sit down with your teams to plan out your next initiative, ask them these questions (Gothelf, J., & Seiden, J., Sense and respond. 2017):

  • What is the most important thing (or things) we need to learn first?
  • What is the fastest, most efficient way to learn that?
Learn more about how to ask questions that ensure teams are making good decisions in Strategy, Facilitation, and the Art of Asking Questions (Photo by Burak K on Pexels.com)

If you only have one hypothesis to test it’s clear where to spend the time you have to do discovery work. If you have many hypotheses, how do you decide where your precious discovery hours should be spent? Which hypotheses should be tested? Which ones should be de-prioritised or just thrown away? To help answer this question, Jeff Gothelf put together the Hypothesis Prioritisation Canvas (Gothelf, J., The hypothesis prioritization canvas, 2019):

The hypothesis prioritization canvas helps facilitate an objective conversation with your team and stakeholders to determine which hypotheses will get your attention and which won’t (Gothelf, J., 2019)

Design and Conduct Tests

You test the most important hypotheses with appropriate experiments. Each experiment generates evidence and insights that allow you to learn and decide. Based on the evidence and your insights, you either adapt your idea, if you learn that you were on the wrong path, or continue testing other aspects of your idea, if the evidence supports your direction (Bland, D. J., & Osterwalder, A., Testing Business Ideas: A Field Guide for Rapid Experimentation, 2019).

A successful project is not deemed successful because it is delivered according to a plan, but because it stood the test of reality.

“Walk the walk” in The decision maker’s playbook. Mueller, S., & Dhar, J. (2019)

By building, measuring and learning, designers are able to get closer to great user experiences sooner rather than later (Gothelf, J., & Seiden, J., Lean UX: Applying lean principles to improve user experience, 2013)

Without learning, you risk delivering a product or service no one finds valuable.

Gothelf, J., & Seiden, J., “The Sense and Respond Model” in Sense and respond (2017)

Experiments replace guesswork, intuition and best practices with knowledge. Experimentation is at the heart of what software developers call agile development. Rather than planning all activities up-front and executing them sequentially, agile development emphasises running many experiments and learning from them. Applying this tactic has a number of benefits (Mueller, S., & Dhar, J., The decision maker’s playbook, 2019):

  • It allows you to focus on actual outcomes: a successful project is not deemed successful because it is delivered according to a plan, but because it stood the test of reality.
  • It decreases re-work: because the feedback cycles are short, potential errors or problems are spotted quickly and can be smoothed out faster than with conventional planning.
  • It reduces risks: because of increased transparency throughout the implementation process, risks can be better managed than in a conventional project.
Experiment Canvas in Design a better business: New tools, skills, and mindset for strategy and innovation (Van Der Pijl, P., Lokitz, J., & Solomon, L. K., 2016)

What’s important to understand is that testing rarely means just building a smaller version of what you want to sell. It’s not about building, nor selling, something. It’s about testing the most important assumptions, to show this idea could work. And that does not necessarily require building anything for a very long time. You need to first prove that there’s a market, that people have the jobs, pains and gains, and that they’re willing to pay (Bland, D. J., & Osterwalder, A., Testing business ideas, 2020).

Any quantifying and qualifying set of tools or frameworks should have a component of Learning and Deciding  (Bland, D. J., & Osterwalder, A.,Testing business ideas, 2020)
Learn and Decide in Testing business ideas (Bland, D. J., & Osterwalder, A., 2020)

Learn and Decide

Learning faster than everyone else is no longer enough. You need to put that learning into action, because what you’ve learned has an expiration date: both markets and technology move so quickly that the insights you’ve gained can expire within months, weeks or even days. You should turn what you learn into (Bland, D. J., & Osterwalder, A., Testing business ideas, 2020):

  • Next steps to make progress with testing and de-risking a business idea.
  • Informed decisions based on collected insights.
  • Decisions to abandon, change, and/or continue testing a business idea.
With the Learning Card, you can systematically capture insights from your experiments and turn them into actionable learnings in four steps. These two tools paired together will supercharge your experiments and help you find the right business model and value proposition for your idea.
Learning Card in Testing business ideas (Bland, D. J., & Osterwalder, A., 2020)

Pivot and Risk Mitigation

In a previous post, I mentioned that — more often than not — it is not for lack of ideas that teams cannot innovate, but because of all the friction and drag created by not having a shared vision and understanding of the problems they are trying to solve. It has become a personal rallying cry for me to help teams create shared understanding.

Shared understanding is the collective knowledge of the team that builds over time as the team works together. It’s a rich understanding of the space, the product, and the customers.

“Creating Shared Understanding” in Lean UX: Applying lean principles to improve user experience, Gothelf, J., & Seiden, J. (2013)

It’s been my experience that — left to chance — it’s only natural that teams will stray from vision and goals. Helping teams paddle in the same direction requires not only good vision and goals, but also leadership and intentional facilitation. All the collaboration that goes into creating shared understanding can help mitigate the risk of teams straying away.

Learn more about creating product vision in Strategy and The Importance of Vision (Photo by Pixabay on Pexels.com)

Because collaboration can help with situations where there are a lot of unknowns, it can be helpful to plan for time to investigate and “de-risk” situations, not just create solutions. It’s worth asking (Anderson, G., Mastering Collaboration, 2019):

  • How much risk is there in finding the solution?
  • How possible is it that we’ll develop ideas that fail?
  • If we do fail, how bad are the consequences for users and the company?

Assessing risk requires a solid understanding of the risks and benefits involved. Common problems include information paralysis — the result of gathering too much data and over-analysing it. Determine how much data is really needed initially, and then fine-tune implementations with more data at a later stage (Kourdi, J., Business Strategy: A guide to effective decision-making, 2015).

To understand the risk and uncertainty of your idea you need to ask: “What are all the things that need to be true for this idea to work?” (Bland, D. J., & Osterwalder, A., Testing business ideas, 2020)

By turning instead to exploring what would have to be true, teams go from battling one another to working together to explore ideas. Rather than attempting to bury real disagreements, this approach surfaces differences and resolves them, resulting in more-robust strategies and stronger commitment to them (Lafley, A.G., Martin, R. L., “Shorten Your Odds” in Playing to Win: How Strategy Really Works, 2013).

Risk Analysis and Assessment

Once risks are identified, they can be prioritised according to their potential impact and the likelihood of them occurring. This helps to highlight not only where things might go wrong and what their impact would be, but how, why and where these catalysts might be triggered (Kourdi, J., Business Strategy: A guide to effective decision-making, 2015):

  • Technology. New hardware, software or system configuration can trigger risks, as can new demand on existing information systems and technology.
  • Organisational Change. Risks are triggered by — for example — new management structures or reporting lines, new strategies and commercial agreements.
  • Processes. New products, markets and acquisitions all cause change and can trigger risks.
  • People. New employees, losing key people, poor succession planning, or weak people management can all create dislocation, but the main danger is behaviour: everything from laziness to fraud, exhaustion and simple human error can trigger risks.
  • External factors. Changes to regulation and political, economic or social developments can all affect strategic decisions by bringing to the surface risks that may have lain hidden.

Analyse risks at the start of each iteration (or test); reassess them regularly. For each identified risk, ask yourself (Podeswa, H., “Analyse Risk” in The Business Analyst’s Handbook. 2008):

  • Who owns the risk?
  • What is the likelihood of the risk occurring?
  • What is the impact on the business if it occurs?
  • What is the best strategy for dealing with this risk?
  • Is there anything that can be done to prevent it from happening or to mitigate (lessen) the damage if it does occur?
Risk Map in Mapping Project Risk & Uncertainty (Alkira Consulting, 2021)

Design strategists should help stakeholders and teams think through:

  • Quantifying and mitigating risk of pursuing ideas / approaches
  • Creating strategies for pivot while pursuing an idea/approach
  • Identifying what signals they need to capture in order to know when to pivot.

Start by selecting the biggest risks: the uncertainty that must be addressed now so that you don’t take the product in the wrong direction and experience late failure (e.g. figuring out at a late stage that you are building a product nobody really wants or needs). Next, determine how you can best address the risks — for instance, by observing target users, interviewing customers, or employing a minimum viable product (MVP). Carry out the necessary work and collect the relevant feedback or data. Then analyse the results and use the newly gained insights to decide if you should pivot, persevere or stop — if you should stick with your strategy, change it, or no longer pursue your vision — and take the appropriate actions accordingly (Pichler, R., Strategize, 2016).

Whether you work for a small start-up or an existing large organization, validate your riskiest assumptions as quickly and cheaply as possible so you don’t waste valuable time and resources toiling away at something that likely will never work.
Riskiest Assumption Canvas in Design a better business: New tools, skills, and mindset for strategy and innovation (Van Der Pijl, P., Lokitz, J., & Solomon, L. K., 2016)

Iteratively reworking the product strategy encourages you to carry out just enough market research just in time to avoid too much or too little research, addressing the biggest risks first so that you can quickly understand which parts of your strategy are working and which are not, thus avoiding late failure (Pichler, R., Strategize, 2016).

Pivot, Persevere, or Stop

Once you have collected the relevant feedback or data, reviewed and analysed it, ask yourself if your strategy is still valid: your initial product strategy may contain plenty of assumptions and risks, and you may well discover that the strategy is wrong and does not work. If that is the case, then you have two choices (Pichler, R., Strategize, 2016):

  • stop and let go of your vision, or
  • stick with the vision and change the strategy, which is also called pivot.

Pivoting is attractive only if you pivot early, when the cost of changing direction is comparatively low.

Pichler, R., Strategize (2016)

You should therefore aim to find out quickly if anything is wrong with your strategy, and if you need to fail, then fail fast. While a late pivot can happen, you should avoid it, because the later it occurs, the more difficult and costly it is likely to be (Pichler, R., Strategize, 2016).

Learn more about the Cost of Changing Your Mind in Facilitating Good Decisions (Photo by Pixabay on Pexels.com)

By building, measuring and learning, designers are able to get closer to great user experiences sooner rather than later (Gothelf, J., & Seiden, J., Lean UX: Applying lean principles to improve user experience, 2013).

From this perspective, Risk Mitigation and Testing Business Ideas should go hand-in-hand.

Facilitating Investment Discussions

I’ve seen too many teams where a lot of decisions seem to be driven by the questions “What can we implement with the least effort?” or “What are we able to implement?”, not by the question “What brings value to the user?”.

Design strategists should help teams find objective ways to value design ideas, approaches and solutions to justify the investment in them.

As I mentioned in a previous post, designers must become skilled facilitators who respond, prod, encourage, guide, coach and teach as they guide individuals and groups to make decisions that are critical in the business world through effective processes. There are few decisions that are harder than deciding how to prioritise. The mistake I’ve seen many designers make is to look at prioritisation discussions as a zero-sum game:

  • Our user-centered design toolset may have focused too much on the needs of the user, at the expense of business needs and technological constraints.
  • We need to point at futures that are desirable, profitable, and viable (“Change By Design“, Brown, T., & Katz, B., 2009).

So the facilitation methods and approaches mentioned above should help you engage with the team to find objective ways to value design ideas, approaches and solutions to justify the investment in them. From that perspective, prioritisation goes hand in hand with selecting alternatives.

It’s essential to set priorities and remove distractions so that people can get on with providing service to customers, thus increasing profits and the value of the business (Kourdi, J., Business Strategy: A guide to effective decision-making, 2015).

Priorities Make Things Happen

Berkun, S., Making things happen: Mastering project management (2008)

There are a few things you should ask yourself and/or the team when you keep revisiting and renegotiating the scope of work (DeGrandis, D., Making work visible: Exposing time theft to optimize workflow, 2017):

  • What is your prioritisation policy and how is it visualised? How does each and every item of work that has been prioritised help get us closer to our vision and our goals?
  • How will you signal when work has been prioritised and is ready to be worked on? In other words — where is your line of commitment? How do people know which work to pull?
  • How will we visually distinguish between higher-priority and lower-priority work?

If you have priorities in place, you can always ask questions in any discussion that reframe the argument around a more useful primary consideration. This refreshes everyone’s sense of what success is, visibly dividing the universe into two piles: things that are important and things that are nice, but not important. Here are some sample questions (Berkun, S., Making things happen: Mastering project management, 2008):

  • What problem are we trying to solve?
  • If there are multiple problems, which one is most important?
  • How does this problem relate to or impact our goals?
  • What is the simplest way to fix this that will allow us to meet our goals?

Prioritisation of Value and Desirability

From a user-centered perspective, the most crucial pivot that needs to happen in the conversation between designers and business stakeholders is the framing of value:

  • Business value
  • User value
  • Value to designers (sense of self-realisation? Did I impact someone’s life in a positive way?)

So how do you facilitate discussions that help teams clearly see value from different angles? I’ve found that alignment diagrams are really good to get the teams to have qualifying discussions around value. Here are some examples below:

Alignment Diagrams

Alignment diagrams refer to any map, diagram, or visualization that reveals both sides (Business and Users) of value creation in a single overview. They are a category of diagram that illustrates the interaction between people and organizations (Kalbach, J., ”Visualizing Value: Aligning Outside-in” in Mapping Experiences, 2021).

Customer Journey Maps are visual thinking artifacts that help you get insight into, track, and discuss how a customer experiences a problem you are trying to solve. How does this problem or opportunity show up in their lives? How do they experience it? How do they interact with you? (Lewrick, M., Link, P., & Leifer, L., The design thinking playbook. 2018).

Experience Maps look at a broader context of human behavior. They reverse the relationship and show how the organization fits into a person’s life (Kalbach, J., ”Visualizing Value: Aligning Outside-in” in Mapping Experiences, 2021).

User story mapping is a visual exercise that helps product managers and their development teams define the work that will create the most delightful user experience. User Story Mapping allows teams to create a dynamic outline of a representative user’s interactions with the product, evaluate which steps have the most benefit for the user, and prioritise what should be built next (Patton, J., User Story Mapping: Discover the whole story, build the right product, 2014).

Opportunity Solution Trees are a simple way of visually representing the paths you might take to reach a desired outcome (Torres, T., Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value, 2021)

Service Blueprints are visual thinking artifacts that help to capture the big picture and interconnections, and are a way to plan out projects and relate service design decisions back to the original research insights. The blueprint is different from the service ecology in that it includes specific detail about the elements, experiences, and delivery within the service itself (Polaine, A., Løvlie, L., & Reason, B., Service design: From insight to implementation, 2013).

Strategy Canvases help you compare how well competitors meet customer buying criteria or desired outcomes. To create your own strategy canvas, list the 10-12 most important functional desired outcomes — or buying criteria — on the x-axis. On the y-axis, list the 3-5 most common competitors (direct, indirect, alternative solutions and multi-tool solutions) for the job (Garbugli, É., Solving Product, 2020).

Learn more about Alignment Diagrams in Strategy and Prioritisation (Photo by Breakingpic on Pexels.com)

While Alignment Diagrams are good for facilitating discussions around qualifying value by bringing both the Business and User perspectives together, there is still a need for quantifying value objectively. Let’s look at why.

Quantifying and Qualifying Value, Satisfaction and Desirability

When product managers, designers and strategists are crafting their strategy or working in the discovery phase, the kind of user and customer insights they are looking for is really hard to acquire through quantitative metrics, either because we cannot derive insights from the existing analytics coming from the product, or because we are creating something new (so there are no numbers to refer to). Most such insights (especially desirability and satisfaction) come from preference data.

Preference data consists of the more subjective data that measures a participant’s feelings or opinions of the product.

Rubin, J., & Chisnell, D., Handbook of usability testing: How to plan, design, and conduct effective tests (2011)

Just because preference data is more subjective, it doesn’t mean it is less quantifiable: although design and several usability activities are certainly qualitative, the image of good and bad designs can easily be quantified through metrics like perceived satisfaction, recommendations, etc. (Sauro, J., & Lewis, J. R., Quantifying the user experience: Practical statistics for user research. 2016).

Preference data is typically collected via written, oral, or even online questionnaires, or through the debriefing session of a test. A rating scale that measures how a participant feels about the product is an example of a preference measure (Rubin, J., & Chisnell, D., Handbook of usability testing, 2011).

Now let’s look at some examples of preference data that design strategists can collect to inform strategic decisions.

Value Opportunity Analysis (VOA)

Value Opportunity Analysis (VOA) is an evaluative method that creates a measurable way to predict the success or failure of a product by focusing on the user’s point of view. The Value Opportunity Analysis (VOA) can happen at two stages throughout the design process (Hanington, B., & Martin, B., Universal methods of design, 2012):

  • VOA is typically used in the concept generation stage when prototyping is still low fidelity or even on paper.
  • It is also used at the launch stage, tested for quality assurance to determine market readiness. An example could be testing a current design prior to investing in a redesign.
While quantifying and qualifying value, there are seven value opportunities in a Value Opportunity Analysis: Emotion (Adventure, Independence, Security, Sensuality, Confidence, Power), Aesthetics (Visual, Auditory, Tactile, Olfactory, Taste) Identity ( Point in time, Sense of place, Personality), Impact (Social, Environmental), Ergonomics (Comfort, Safety, Ease of use) Core Technology (Reliable, Enabling) Quality (Craftsmanship, Durability)
Value Opportunity Analysis in  Universal methods of design (Hanington, B., & Martin, B., 2012)

There are seven value opportunities (Hanington, B., & Martin, B., Universal methods of design, 2012):

  1. Emotion: Adventure, Independence, Security, Sensuality, Confidence, Power
  2. Aesthetics: Visual, Auditory, Tactile, Olfactory, Taste
  3. Identity: Point in time, Sense of place, Personality
  4. Impact: Social, Environmental
  5. Ergonomics: Comfort, Safety, Ease of use
  6. Core Technology: Reliable, Enabling
  7. Quality: Craftsmanship, Durability
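
As a rough illustration of how a VOA could be turned into numbers, the sketch below rates each attribute on a simple low/medium/high scale and aggregates per category. The 0-2 scale, the ratings, and the aggregation are assumptions made for illustration; the method as described by Hanington and Martin is a qualitative rating of each attribute against the product’s aspirations.

```python
# Hypothetical Value Opportunity Analysis: rate each attribute low/medium/high (0-2) for
# how well the concept delivers it, then aggregate per value opportunity category.
SCALE = {"low": 0, "medium": 1, "high": 2}

voa = {
    "Emotion":    {"Adventure": "low", "Independence": "medium", "Security": "high",
                   "Sensuality": "low", "Confidence": "high", "Power": "low"},
    "Ergonomics": {"Comfort": "high", "Safety": "high", "Ease of use": "medium"},
    "Quality":    {"Craftsmanship": "medium", "Durability": "high"},
    # ...the remaining categories (Aesthetics, Identity, Impact, Core Technology) follow the same shape.
}

for category, attributes in voa.items():
    score = sum(SCALE[rating] for rating in attributes.values())
    print(f"{category}: {score} out of {2 * len(attributes)}")
```
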
Usefulness, Satisfaction, and Ease of Use (USE)

The Usefulness, Satisfaction, and Ease of Use Questionnaire (USE, Lund, 2001) measures the subjective usability of a product or service. It is a 30-item survey that examines four dimensions of usability (Sauro, J., & Lewis, J. R., Quantifying the user experience. 2016):

  • Usefulness
  • Ease of use
  • Ease of learning
  • Satisfaction.
Example of USE questionnaire (from Journal of Otolaryngology)
American Customer Satisfaction Index (ACSI)

The satisfaction-level indicator relies on three critical 10-point-scale questions to obtain customer satisfaction. These American Customer Satisfaction Index (ACSI) questions are categorized into (Fornell et al., The American Customer Satisfaction Index, 1996):

  • Satisfaction
  • Expectation Levels
  • Performance
Example of ACSI Questionnaire in ACSI (American Customer Satisfaction Index) Score & Its Calculation (Verint Systems Inc, 2021)

While intended as a macroeconomic measure of U.S. consumers in general, many corporations have used the American Customer Satisfaction Index (ACSI) to quantify and qualify the satisfaction of their own customers.
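
A commonly cited simplified calculation rescales the three 1-10 answers to a 0-100 index. The sketch below assumes equal weights for illustration (the official index derives its weights from an econometric model), so treat it as an approximation rather than the ACSI methodology itself.

```python
def acsi_score(satisfaction: float, expectancy: float, performance: float) -> float:
    """Simplified ACSI-style score: three answers on a 1-10 scale rescaled to 0-100.

    Equal weights are assumed here; the official ACSI uses model-derived weights.
    """
    return (satisfaction - 1 + expectancy - 1 + performance - 1) / 27 * 100

print(round(acsi_score(8, 7, 9), 1))  # -> 77.8
```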

System Usability Scale (SUS)

The System Usability Scale (SUS) consists of ten statements to which participants rate their level of agreement. No attempt is made to assess different attributes of the system (e.g. usability, usefulness, etc.): the intent is to look at the combined rating (Tullis, T., & Albert, W., Measuring the user experience. 2013).

The System Usability Scale (SUS) provides a “quick and dirty”, reliable tool for measuring usability. It consists of a 10-item questionnaire with five response options for respondents, from Strongly agree to Strongly disagree.
System Usability Scale (SUS) questions in How Low Can You Go? Is the System Usability Scale Range Restricted? (Kortum, P., & Acemyan, C. Z., 2013)
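
The standard SUS scoring is mechanical: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch, with a hypothetical set of responses:

```python
def sus_score(responses: list[int]) -> float:
    """Score one SUS questionnaire: ten answers, each 1 (strongly disagree) to 5 (strongly agree)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5  # yields a 0-100 score

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```
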
Usability Metric for User Experience (UMUX)

In response to the need for a shorter questionnaire, Finstad introduced the Usability Metric for User Experience (UMUX) in 2010. It’s intended to be similar to the SUS but is shorter and targeted toward the ISO 9241 definition of usability (effectiveness, efficiency, and satisfaction). It contains two positive and two negative items with a 7-point response scale. The four items are (Sauro, J., & Lewis, J. R., Quantifying the user experience. 2016):

  • [This system’s] capabilities meet my requirements.
  • Using [this system] is a frustrating experience.
  • [This system] is easy to use.
  • I have to spend too much time correcting things with [this system].
UMUX-Lite

To improve the UMUX, Lewis et al. (“Measuring perceived usability: The SUS, UMUX-LITE, and AltUsability” in International Journal of Human-Computer Interaction, 31(8), 496–505, 2015) proposed a shorter all-positive questionnaire called the UMUX-Lite, using the same 7-point scale with the following two items (Sauro, J., & Lewis, J. R., Quantifying the user experience. 2016):

  • [This system’s] capabilities meet my requirements.
  • [This system] is easy to use.
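
Both questionnaires are typically rescaled to a 0-100 score. As commonly described, UMUX reverse-scores its two negatively worded items before summing, and UMUX-Lite simply rescales its two positive items; the sketch below follows that scoring, with hypothetical responses.

```python
def umux_score(r1: int, r2: int, r3: int, r4: int) -> float:
    """UMUX: four items on a 1-7 scale; items 2 and 4 are negatively worded and reverse-scored."""
    raw = (r1 - 1) + (7 - r2) + (r3 - 1) + (7 - r4)
    return raw / 24 * 100

def umux_lite_score(r1: int, r2: int) -> float:
    """UMUX-Lite: only the two positively worded items, each on a 1-7 scale."""
    return ((r1 - 1) + (r2 - 1)) / 12 * 100

print(round(umux_score(6, 2, 7, 3), 1))  # -> 83.3
print(round(umux_lite_score(6, 7), 1))   # -> 91.7
```
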
Net Promoter Score (NPS)

Within a broad field of methods for measuring satisfaction, one popular framework is the net promoter score. The simplicity of the method is its great advantage—customers are simply asked “How likely is it that you would recommend our company/ product to a friend or colleague?” This type of survey is relatively simple to conduct, and it is constant across companies and industries. This makes it easy for companies to compare their performance with the competition, and good net promoter scores have been documented to relate directly to business growth (Polaine, A., Løvlie, L., & Reason, B., Service design, 2013).

I mention NPS because it’s a well-known metric, but I — and other design leaders — have reservations about it: there have been challenges to the claim of a strong relationship between NPS and company growth. Also, there is no well-defined method for computing confidence intervals around NPS (Sauro, J., & Lewis, J. R., Quantifying the user experience. 2016).
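
For completeness, here is a minimal sketch of how the score itself is computed from the 0-10 likelihood-to-recommend answers: the percentage of promoters (9-10) minus the percentage of detractors (0-6).

```python
def nps(ratings: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return (promoters - detractors) / len(ratings) * 100

print(nps([10, 9, 8, 7, 6, 3, 10, 9, 5, 8]))  # -> 10.0
```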

Watch “Seeing the Big Picture: The Development of an Experience Scorecard” with Bill Albert
Watch Bill Albert talk about how NPS and CSAT scores are not enough to measure the quality of the experience in Seeing the Big Picture: The Development of an Experience Scorecard
Desirability Testing

A Desirability Test is great for gauging first-impression emotional responses to product and services (Hanington, B., & Martin, B., Universal methods of design, 2012):

  • Explores the affective responses that different designs elicit from people based on first impressions.
  • Using index cards with positive, neutral and negative adjectives written on them, participants pick those that describe how they feel about a design or a prototype.
  • Can be conducted using low-fidelity prototypes as a baseline before the team embarks on a redesign.

Participants are offered different visual-design alternatives and are expected to associate each alternative with a set of  attributes selected from a closed list.

Experience Sampling

Experience Sampling is a strategic research technique that answers a high-level business (or roadmap) question rather than evaluating a design or product that already exists. Experience Sampling is good for uncovering unmet needs, which will lead to generating great ideas for new products and for validating (or invalidating) ideas you already have (Sharon, T., Validating Product Ideas, 2016).

Learn more about Experience Sampling in Don’t Listen to Users: Sample their experience with Tomer Sharon

In an experience sampling study, research participants are interrupted several times a day or week to note their experience in real time. The key is to ask the same question over and over again at random times during the day or week. This cadence and repetition strengthens your findings’ validity and allows you to identify patterns (Sharon, T., Validating Product Ideas, 2016).

Jobs To Be Done (JTBD) and Outcome-Driven Innovation

Outcome-Driven Innovation (ODI) is a strategy and innovation process built around the theory that people buy products and services to get jobs done. It links a company’s value-creation activities to customer-defined metrics for quantifying and qualifying success. Ulwick found that previous innovation practices were ineffective because they were incomplete, overlapping, or unnecessary.

Outcome-Driven Innovation® (ODI) is a strategy and innovation process that enables a company to create and market winning product and service offerings with a success rate that is 5-times the industry average

Ulwick, A.,  What customers want: Using outcome-driven innovation to create breakthrough products and services (2005)

Clayton Christensen credits Ulwick and Richard Pedi of Gage Foods with the way of thinking about market structure used in the chapter “What Products Will Customers Want to Buy?” in his Innovator’s Solution and called “jobs to be done” or “outcomes that customers are seeking”.

UX Matrix: Opportunity Scores

Ulwick’s “opportunity algorithm” measures and ranks innovation opportunities. Standard gap analysis looks at the simple difference between importance and satisfaction metrics; Ulwick’s formula gives twice as much weight to importance as to satisfaction, where importance and satisfaction are the proportion of high survey responses.

You’re probably asking yourself “where do these values come from?” That’s where user research comes in handy: once you’ve got the List of Use Cases, you go back to your users and probe how important each use case is, and how satisfied they are with the product with regard to each use case.

Once you’ve obtained the opportunity scores for each use case, what comes next? There are two complementary pieces of information that the scores reveal: where the market is underserved and where it is overserved. We can use this information to make some important targeting and resource-related decisions.
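
As commonly presented, Ulwick’s opportunity score is importance plus the importance-satisfaction gap (floored at zero), with importance and satisfaction expressed on a 0-10 scale derived from the share of top-box survey responses. A minimal sketch with hypothetical outcomes:

```python
def opportunity(importance: float, satisfaction: float) -> float:
    """Opportunity = importance + max(importance - satisfaction, 0), both on a 0-10 scale."""
    return importance + max(importance - satisfaction, 0)

# Hypothetical desired outcomes with survey-derived importance/satisfaction scores.
outcomes = {
    "minimize the time it takes to find an account": (8.2, 3.1),  # importance > satisfaction: underserved
    "minimize the time it takes to log in":          (6.5, 7.8),  # satisfaction > importance: overserved
}
for name, (imp, sat) in outcomes.items():
    print(f"{name}: opportunity {opportunity(imp, sat):.1f}")
```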

Opportunity Scores graph
Plotting the Jobs-to-be-Done in order to map where the market is underserved and where it is overserved (Ulwick, A., What customers want: Using outcome-driven innovation to create breakthrough products and services, 2005)
The Importance versus Satisfaction Framework

Similar to Outcome-Driven Innovation, this framework proposes that you should be quantifying and qualifying the customer need that any particular feature of the product is going to address (Olsen, D. The lean product playbook, 2015):

  • How important is that?
  • Then how satisfied are people with the current alternatives that are out there?
Dan Olsen's framework proposes you should be quantifying and qualifying the customer need that any particular feature of the product is going to address: How important is that need? And how satisfied are people with the current alternatives that are out there? You want to build things that address high-importance needs with low satisfaction
Importance versus Satisfaction Quadrants in The lean product playbook (Olsen, D., 2015)

What I like about Olsen’s approach to assessing opportunities is that he created a couple of variations of opportunity scores:

  • Customer Value Delivered = Importance x Satisfaction
  • Opportunity to Add Value = Importance x (1 – Satisfaction)
  • Opportunity = Importance – Current Value Delivered
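
A minimal sketch of these expressions, assuming importance and satisfaction are captured as fractions between 0 and 1 (for example, top-box survey percentages); note that with these definitions the second and third expressions coincide:

```python
def customer_value_delivered(importance: float, satisfaction: float) -> float:
    return importance * satisfaction

def opportunity_to_add_value(importance: float, satisfaction: float) -> float:
    return importance * (1 - satisfaction)

importance, satisfaction = 0.9, 0.4
value = customer_value_delivered(importance, satisfaction)  # 0.36
gap = opportunity_to_add_value(importance, satisfaction)    # 0.54
opportunity = importance - value                            # importance minus current value delivered: 0.54
print(value, gap, opportunity)
```
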
Kano Model

The Kano Model, developed by Dr. Noriaki Kano, is a way of classifying customer expectations into three categories: expected needs, normal needs, exciting needs. This hierarchy can be used to help with our prioritization efforts by clearly identifying the value of solutions to the needs in each category (“Kano Model” in Product Roadmaps Relaunched, Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M., 2017):

  • The customer’s expected needs are roughly equivalent to the critical path: if those needs are not met, they become dissatisfiers.
  • If you meet the expected needs, customers will start articulating normal needs, or satisfiers — things they don’t normally need in the product but will satisfy them.
  • When normal needs are largely met, then exciting needs (delighters or wows) go beyond the customers’ expectations.
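
The classic Kano survey pairs a functional question (“How would you feel if the product had this feature?”) with a dysfunctional one (“…and if it did not?”) and maps the answer pair onto a category. The sketch below encodes the commonly used evaluation table; mapping must-be, one-dimensional and attractive onto the expected/normal/exciting language above is an interpretation, and the example answers are hypothetical.

```python
# Answers to both the functional and dysfunctional questions use the same 5-point scale.
ANSWERS = ["like", "expect", "neutral", "live with", "dislike"]

# Rows = functional answer, columns = dysfunctional answer.
# M = must-be (expected), O = one-dimensional (normal satisfier), A = attractive (delighter),
# I = indifferent, R = reverse, Q = questionable (contradictory answers).
TABLE = [
    ["Q", "A", "A", "A", "O"],  # functional: like
    ["R", "I", "I", "I", "M"],  # functional: expect
    ["R", "I", "I", "I", "M"],  # functional: neutral
    ["R", "I", "I", "I", "M"],  # functional: live with
    ["R", "R", "R", "R", "Q"],  # functional: dislike
]

def kano_category(functional: str, dysfunctional: str) -> str:
    return TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

print(kano_category("like", "dislike"))     # O: one-dimensional ("normal" satisfier)
print(kano_category("like", "neutral"))     # A: attractive (exciting delighter)
print(kano_category("neutral", "dislike"))  # M: must-be (expected need)
```
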
“X axis: Investment; Y axis: Satisfaction” in Kano Model Analysis in Product Design

The Kano methodology was initially adopted by operations researchers, who added statistical rigor to the question pair results analysis. Product managers have leveraged aspects of the Kano approach in Quality Function Deployment (QFD). More recently, this methodology has been used by Agile teams and in market research (Moorman, J., “Leveraging the Kano Model for Optimal Results” in UX Magazine, 2012).

Learn more about Quantifying and Qualifying User Needs and Delight using the Kano Method (Jan Moorman: Measuring User Delight using the Kano Methodology)
Learn more about the Kano Method from Measuring User Delight using the Kano Methodology (Moorman, J., 2012)

Visibility and Traceability

It’s an old saying that what gets measured gets done. There is more than a little truth to this. If aspirations are to be achieved, capabilities developed, and management systems created, progress needs to be measured (“Manage What Matters” in Playing to Win: How Strategy Really Works, Lafley, A.G., Martin, R. L., 2013).

Measurement allows comparison of expected outcomes with actual outcomes and enables you to adjust strategic choices accordingly.

“Manage What Matters” in Playing to Win: How Strategy Really Works (Lafley, A.G., Martin, R. L., 2013)

It is crucial that designers engage with their business stakeholders to understand what objectives and unique positions they want their products to assume in the industry, and the choices they are making in order to achieve such objectives and positions.

Six Strategic Questions, adapted from “Strategy Blueprint” in Mapping Experiences: A Guide to Creating Value through Journeys, Blueprints, and Diagrams (Kalbach, 2020)

Even if you clearly articulate the answers to the six strategic questions (what are our aspirations, what are our challenges, where will we focus, what are our guiding principles, what types of activities will we pursue), strategies can still fail — spectacularly — if you fail to establish management systems that support those choices. Without the supporting systems, structures and measures for quantifying and qualifying outcomes, a strategy remains a wish list, a set of goals that may or may not ever be achieved (“Manage What Matters” in Playing to Win: How Strategy Really Works, Lafley, A.G., Martin, R. L., 2013).

Learn more about how to create great choices in Strategy and the Art of Creating Choices (Photo by Pixabay on Pexels.com)

Although design and several usability activities are certainly qualitative, the image of good and bad designs can easily be quantified in conversation, completion rates, completion times, perceived satisfaction, recommendations and sales (Sauro, J., & Lewis, J. R., Quantifying the user experience: Practical statistics for user research. 2016).

Use Case Lists: Pugh Matrix

The UXI Matrix is a simple, flexible tool that extends the concept of the product backlog to include UX factors normally not tracked by agile teams. To create a UX Integration Matrix, you add several UX-related data points to your user stories (Innes, J., Integrating UX into the product backlog, 2012).

Pugh Matrix helps us visualise the complete backlog and facilitates prioritisation discussions while quantifying and qualifying outcomes.
Pugh Matrix in Integrating UX into the product backlog (Innes, J., 2012)

The UXI Matrix helps teams integrate UX best practices and user-centered design by inserting UX at every level of the agile process:

  • Groom the backlog: During release and sprint planning you can sort, group, and filter user stories in Excel.
  • Reduce design overhead: if a story shares several personas with another story in a multi-user system, then that story may be a duplicate. Grouping by themes can also help here.
  • Facilitate Collaboration: You can share it with remote team members. Listing assigned staff provides visibility into who’s doing what (see the columns under the heading Staffing). Then team members can figure out who’s working on related stories and check on what’s complete, especially if you create a hyperlink to the design or research materials right there in the matrix.
  • Track user involvement and other UX metrics: It makes it easier to convince the team to revisit previous designs when metrics show users cannot use a proposed design, or are unsatisfied with the current product or service. Furthermore, it can be useful to track satisfaction by user story (or story specific stats from multivariate testing) in a column right next to the story.
Use Case List (also known as PUGH MATRIX) is great tool for quantifying and qualifying outcomes by bringing visibility to the number and the status of the backlog.
Click on the image to see an example of Use Case List: PUGH MATRIX

I’ve created Use Case Lists (or Pugh Matrices), decision matrices that help evaluate and prioritize a list of options, while working with Product Management and Software Architecture teams on both the AutoCAD Map3D and AutoCAD Utility Design projects. The approach is to first establish a list of weighted criteria and then evaluate each use case against those criteria, taking into account input from the different stakeholders on the team (user experience, business value, etc.).
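
A minimal sketch of that kind of weighted scoring follows; the criteria, weights, 1-5 scores and use-case names are all hypothetical, and a strict Pugh matrix would compare options against a baseline concept, but the weighted-criteria variant shown here matches the usage described above.

```python
# Hypothetical weighted criteria agreed with stakeholders (weights sum to 1.0).
criteria = {
    "user value": 0.4,
    "business value": 0.3,
    "low technical effort": 0.2,
    "strategic fit": 0.1,
}

# Each use case is scored 1-5 against every criterion by the team.
use_cases = {
    "Edit drawings while offline": {"user value": 5, "business value": 3,
                                    "low technical effort": 2, "strategic fit": 4},
    "Bulk import from spreadsheet": {"user value": 3, "business value": 4,
                                     "low technical effort": 4, "strategic fit": 3},
}

ranked = sorted(
    ((sum(weight * scores[name] for name, weight in criteria.items()), use_case)
     for use_case, scores in use_cases.items()),
    reverse=True,
)
for total, use_case in ranked:
    print(f"{use_case}: {total:.2f}")
```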

Using the Outcome-Driven Innovation framework above, you can prioritize the use cases based on their Opportunity Scores.
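Ulwick’s Opportunity Score combines how important a desired outcome is to customers with how poorly it is currently satisfied, both typically rated on a 0–10 scale. A rough sketch, with invented ratings:

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Outcome-Driven Innovation opportunity score.

    Importance and satisfaction are average customer ratings on a 0-10 scale;
    under-served outcomes (high importance, low satisfaction) score highest.
    """
    return importance + max(importance - satisfaction, 0)

# Hypothetical desired outcomes with invented importance/satisfaction ratings
outcomes = {
    "Minimise time to locate an asset on the map": (9.1, 4.2),
    "Minimise rework when importing survey data":  (8.4, 7.9),
}
for outcome, (imp, sat) in sorted(outcomes.items(),
                                  key=lambda kv: -opportunity_score(*kv[1])):
    print(f"{opportunity_score(imp, sat):4.1f}  {outcome}")
```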

Objectives, Goals, Strategy, and Measures (OGSM)

Objectives, Goals, Strategy, and Measures (OGSM) is a simple, clear expression of a strategy, a living document that everyone in the business knows and understands (“Manage What Matters” in Playing to Win: How Strategy Really Works, Lafley, A.G., Martin, R. L., 2013).

OGSM table example from “Manage What Matters” in Playing to Win: How Strategy Really Works (Lafley, A.G., Martin, R. L., 2013).

The main benefit of the OGSM is that it helps management refrain from setting convoluted targets. By restricting the plan to a single page, the OGSM sharpens employees’ focus and is an effective reference tool for direction in times of uncertainty and decision dilemmas (“Manage What Matters” in Playing to Win: How Strategy Really Works, Lafley, A.G., Martin, R. L., 2013).
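Because the whole plan fits on one page, it is easy to capture as a small, fixed structure. A hypothetical sketch (the objective, goals, strategies and measures below are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class OGSM:
    """One-page plan: a qualitative objective, quantified goals,
    the strategic choices, and the measures used to track them."""
    objective: str                                   # qualitative statement of intent
    goals: list = field(default_factory=list)        # quantified targets
    strategies: list = field(default_factory=list)   # where-to-play / how-to-win choices
    measures: list = field(default_factory=list)     # metrics tied to the strategies

# Hypothetical example for a design organisation
plan = OGSM(
    objective="Make our field tools the easiest to learn in the industry",
    goals=["Raise task completion for first-time users from 60% to 85% by Q4"],
    strategies=["Focus the next two releases on the dispatcher onboarding journey"],
    measures=["First-week task completion rate", "Support tickets per new user"],
)
print(plan.objective, plan.measures, sep="\n")
```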

Value Stream Mapping

Value Stream Mapping is a practical and highly effective way to learn to see and resolve disconnects, redundancies, and gaps in how work gets done. Value Stream Mapping Transformation Plans include (Martin, K., & Osterling, M., Value stream mapping, 2014):

  • Measurable outcomes
  • Clear ownership
  • Projected start and end dates for each improvement
  • Real-time status for tracking transformation
Value Stream Mapping Transformation Plans are essential tools for quantifying and qualifying outcomes by tracking the execution of improvements.
Value Stream Mapping Transformation Plans are essential tools in executing improvement (Martin, K., & Osterling, M., Value stream mapping, 2014)
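The four elements listed above translate almost directly into a record that a team can track over time. A minimal sketch, with an invented improvement and owner:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TransformationItem:
    """One improvement from a value stream mapping transformation plan:
    a measurable outcome, a clear owner, projected dates and live status."""
    improvement: str
    measurable_outcome: str
    owner: str
    planned_start: date
    planned_end: date
    status: str = "Not started"   # e.g. Not started / In progress / Done

# Hypothetical plan entry for an invented design-to-development value stream
plan = [
    TransformationItem(
        improvement="Combine design review and architecture review into one session",
        measurable_outcome="Cut handoff lead time from 10 days to 4 days",
        owner="Head of Design Operations",
        planned_start=date(2021, 9, 1),
        planned_end=date(2021, 10, 15),
        status="In progress",
    ),
]

# Real-time tracking: surface items that slipped past their projected end date
overdue = [i.improvement for i in plan
           if i.status != "Done" and date.today() > i.planned_end]
print(overdue)
```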

The Right Time for Quantifying and Qualifying

You might be asking yourself, “These are all great, but when should I be doing what?” Without knowing what kind of team setup you have and what kinds of processes you run in your organization, the best I can do is to map all of the techniques above to the Double Diamond framework.

The Double Diamond Framework

Design Council’s Double Diamond clearly conveys a design process to designers and non-designers alike. The two diamonds represent a process of exploring an issue more widely or deeply (divergent thinking) and then taking focused action (convergent thinking).  

  • Discover. The first diamond helps people understand, rather than simply assume, what the problem is. It involves speaking to and spending time with people who are affected by the issues.
  • Define. The insights gathered from the discovery phase can help you to define the challenge in a different way.
  • Develop. The second diamond encourages people to give different answers to the clearly defined problem, seeking inspiration from elsewhere and co-designing with a range of different people.
  • Deliver. Delivery involves testing out different solutions at a small scale, rejecting those that will not work and improving the ones that will.
Design Council’s framework for innovation also includes the key principles and design methods that designers and non-designers need to take, and the ideal working culture needed, to achieve significant and long-lasting positive change.
 A clear, comprehensive and visual description of the design process in What is the framework for innovation? (Design Council, 2015)

Map of Quantifying and Qualifying Activities and Methods

Process awareness characterises the degree to which participants are informed about the process procedures, rules, requirements, workflow and other details. The higher the process awareness, the more deeply the participants engage in the process, and the better the results they deliver.

In my experience, the biggest disconnect between the work designers need to do and the mindset of the rest of the team is usually how quickly we tend, when not facilitated, to jump to solutions instead of contemplating and exploring the problem space a little longer.

Map of Quantifying and Qualifying Activities in the Double Diamond (Discover, Define, Develop and Deliver)

Knowing when the team should be diverging, when they should be exploring, and when they should be closing will help ensure they get the best out of their collective brainstorming and the power of multiple perspectives, and will keep the team engaged.
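For teams that want to keep such a map next to their backlog, it can also be expressed as a simple lookup from phase to methods. The sketch below is purely illustrative and intentionally coarse; the placement of each method in a phase is just one possible arrangement, not a hard rule:

```python
# Purely illustrative mapping of quantifying/qualifying methods (all discussed
# in this post) to Double Diamond phases; the placement of each method is one
# possible arrangement, not a prescription from the sources cited here.
activity_map = {
    "Discover": ["Qualitative user research",
                 "Opportunity Scores (Outcome-Driven Innovation)"],
    "Define":   ["OGSM (Objectives, Goals, Strategy, Measures)",
                 "Use Case List / Pugh Matrix"],
    "Develop":  ["Usability testing with completion rates and satisfaction scores"],
    "Deliver":  ["Value Stream Mapping transformation plan",
                 "Product and experience metrics tracking"],
}

def methods_for(phase: str) -> list:
    """Look up the suggested methods for a given Double Diamond phase."""
    return activity_map.get(phase, [])

print(methods_for("Define"))
```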

Quantifying and Qualifying Activities during “Discover”

Here are my suggested quantifying and qualifying activities and methods for this phase:

Quantifying and Qualifying Activities during “Define”

Here are my suggested quantifying and qualifying activities and methods for this phase:

Quantifying and Qualifying Activities during “Develop”

Here are my suggested quantifying and qualifying activities and methods for this phase:

Quantifying and Qualifying Activities during “Deliver”

Here are my suggested quantifying and qualifying activities and methods for this phase:

Facilitate Quantifying and Qualifying Discussions

Rather than complaining that everyone else is jumping too quickly into solutions, I believe designers should facilitate these discussions and help others build awareness of the creative and problem-solving process.

I’ll argue for the need for facilitation in the following sense: if designers want to influence the decisions that shape strategy, they must step up to the plate and become skilled facilitators who respond, prod, encourage, guide, coach and teach as they guide individuals and groups, through effective processes, to the decisions that are critical in the business world.

That said, in my opinion facilitation here does not only mean “facilitating workshops”, but facilitating decisions regardless of what kinds of activities are required.

Learn more about becoming a skilled facilitator (Photo by fauxels on Pexels.com)

Recommended Reading

Anderson, G. (2019). Mastering Collaboration: Make Working Together Less Painful and More Productive. O’Reilly UK Ltd.

Berkun, S. (2008). Making things happen: Mastering project management. Sebastopol, CA: O’Reilly Media.

Bland, D. J., & Osterwalder, A. (2020). Testing business ideas: A field guide for rapid experimentation. Standards Information Network.

Brown, T., & Katz, B. (2009). Change by design: how design thinking transforms organizations and inspires innovation. [New York]: Harper Business

Christensen, C. M., & Raynor, M. E. (2013). The innovator’s solution: Creating and sustaining successful growth. Boston, MA: Harvard Business Review Press.

Croll, A., & Yoskovitz, B. (2013). Lean Analytics: Use Data to Build a Better Startup Faster. O’Reilly Media.

DeGrandis, D. (2017). Making work visible: Exposing time theft to optimize workflow. Portland, OR: IT Revolution Press.

Design Council. (2015, March 17). What is the framework for innovation? Design Council’s evolved Double Diamond. Retrieved August 5, 2021, from designcouncil.org.uk website: https://www.designcouncil.org.uk/news-opinion/what-framework-innovation-design-councils-evolved-double-diamond

Fornell, C., Johnson, M. D., Anderson, E. W., Cha, J., & Bryant, B. E. (1996). The American customer satisfaction index: Nature, purpose, and findings. Journal of Marketing, 60(4), 7.

Garbugli, É. (2020). Solving Product: Reveal Gaps, Ignite Growth, and Accelerate Any Tech Product with Customer Research. Wroclaw, Poland: Amazon.

Gothelf, J. (2019, November 8). The hypothesis prioritization canvas. Retrieved April 25, 2021, from Jeffgothelf.com website: https://jeffgothelf.com/blog/the-hypothesis-prioritization-canvas/

Gothelf, J., & Seiden, J. (2013). Lean UX: Applying lean principles to improve user experience. Sebastopol, CA: O’Reilly Media.

Gothelf, J., & Seiden, J. (2017). Sense and respond: How successful organizations listen to customers and create new products continuously. Boston, MA: Harvard Business Review Press.

Govindarajan, V., & Trimble, C. (2010). The other side of innovation: Solving the execution challenge. Boston, MA: Harvard Business Review Press.

Hanington, B., & Martin, B. (2012). Universal methods of design: 100 Ways to research complex problems, develop innovative ideas, and design effective solutions. Beverly, MA: Rockport.

Innes, J. (2012, February 3). Integrating UX into the product backlog. Retrieved July 28, 2021, from Boxesandarrows.com website: https://boxesandarrows.com/integrating-ux-into-the-product-backlog/

Kalbach, J. (2020). Mapping Experiences: A Guide to Creating Value through Journeys, Blueprints, and Diagrams (2nd ed.). Sebastopol, CA: O’Reilly Media.

Kourdi, J. (2015). Business Strategy: A guide to effective decision-making. New York, NY: PublicAffairs

Kortum, P., & Acemyan, C. Z. (2013). How Low Can You Go? Is the System Usability Scale Range Restricted? Journal of Usability Studies, 9(1), 14–24. https://uxpajournal.org/wp-content/uploads/sites/7/pdf/JUS_Kortum_November_2013.pdf

Lafley, A. G., & Martin, R. L. (2013). Playing to Win: How Strategy Really Works. Boston, MA: Harvard Business Review Press.

Lewis, J. R., Utesch, B. S., & Maher, D. E. (2015). Measuring perceived usability: The SUS, UMUX-LITE, and AltUsability. International Journal of Human-Computer Interaction, 31(8), 496–505.

Lewrick, M., Link, P., & Leifer, L. (2018). The design thinking playbook: Mindful digital transformation of teams, products, services, businesses and ecosystems. Nashville, TN: John Wiley & Sons

Lockwood, T. (2008). “Design Value: A Framework for Measurement.” In Lockwood, T., & Walton, T. (Eds.), Building Design Strategy: Using Design to Achieve Key Business Objectives. New York, NY: Allworth Press.

Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M. (2017). Product Roadmaps Relaunched. Sebastopol, CA: O’Reilly Media.

Lund, A. M. (2001). Measuring usability with the USE questionnaire. Usability Interface, 8(2), 3-6 (www.stcsig.org/usability/newsletter/index.html).

Martin, K., & Osterling, M. (2014). Value stream mapping: How to visualize work and align leadership for organizational transformation. New York, NY: McGraw-Hill Professional.

Moorman, J. (2012). Leveraging the Kano Model for Optimal Results. UX Magazine. Retrieved February 11, 2021, from https://uxmag.com/articles/leveraging-the-kano-model-for-optimal-results

Mueller, S., & Dhar, J. (2019). The decision maker’s playbook: 12 Mental tactics for thinking more clearly, navigating uncertainty, and making smarter choices. Harlow, England: FT Publishing International.

Olsen, D. (2015). The lean product playbook: How to innovate with minimum viable products and rapid customer feedback (1st ed.). Nashville, TN: John Wiley & Sons.

Patton, J. (2014). User Story Mapping: Discover the whole story, build the right product (1st ed.). Sebastopol, CA: O’Reilly Media.

Pichler, R. (2016). Strategize: Product strategy and product roadmap practices for the digital age. Pichler Consulting.

Podeswa, H. (2008). The Business Analyst’s Handbook. Florence, AL: Delmar Cengage Learning.

Polaine, A., Løvlie, L., & Reason, B. (2013). Service design: From insight to implementation. Rosenfeld Media.

Rodden, K., Hutchinson, H., & Fu, X. (2010). Measuring the user experience on a large scale: User-centered metrics for web applications. Proceedings of the 28th International Conference on Human Factors in Computing Systems – CHI ’10. New York, NY: ACM Press.

Rubin, J., & Chisnell, D. (2011). Handbook of usability testing: How to plan, design, and conduct effective tests (2nd ed.). Chichester, England: John Wiley & Sons.

Sauro, J., & Lewis, J. R. (2016). Quantifying the user experience: Practical statistics for user research (2nd Edition). Oxford, England: Morgan Kaufmann.

Sharon, T. (2016). Validating Product Ideas (1st Edition). Brooklyn, New York: Rosenfeld Media.

Torres, T. (2021). Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value. Product Talk LLC.

Tullis, T., & Albert, W. (2013). Measuring the user experience: Collecting, analyzing, and presenting usability metrics (2nd edition). Morgan Kaufmann.

Ulwick, A. (2005). What customers want: Using outcome-driven innovation to create breakthrough products and services. New York, NY: McGraw-Hill.

Van Der Pijl, P., Lokitz, J., & Solomon, L. K. (2016). Design a better business: New tools, skills, and mindset for strategy and innovation. Nashville, TN: John Wiley & Sons.

By Itamar Medeiros

I'm a Strategist, Branding Specialist, Experience Designer, Speaker, and Workshop Facilitator based in Germany, where I work as Director of Design Strategy and Systems at SAP and visiting lecturer at Köln International School of Design of the Cologne University of Applied Sciences.

Working in the Information Technology industry since 1998, I've helped truly global companies in several countries (Brazil, China, Germany, The Netherlands, Poland, The United Arab Emirates, United States, Hong Kong) create great user experience through advocating Design and Innovation principles.

During my 7 years in China, I've promoted the User Experience Design discipline as User Experience Manager at Autodesk and Local Coordinator of the Interaction Design Association (IxDA) in Shanghai.
