
Building Trustworthy Experiences: A Primer of AI Principles for Design and Strategy

Let’s look at how trust is built in AI-powered products. We’ll discuss the different phases of the user’s journey, suggesting ways to design trustworthy AI experiences.

In the digital age, where artificial intelligence (AI) powers a myriad of products and services, building trust with users has become paramount. Many users are hesitant to embrace solutions that rely on AI due to concerns about privacy, reliability, and transparency. To address these concerns, designers and strategists must adhere to a set of principles to create great experiences.

In this blog post, we will take a closer look at how trust is built in AI-powered products. We will examine three critical phases: before using the product, the product experience itself, and the off-boarding phase. Additionally, we will provide some recommendations on building trustworthy experiences.

The insights shared in this article stem from my 25 years of experience as an interaction designer and my concerns regarding the ethics of design and technology, rather than from the perspective of an AI expert. I believe that the principles of building trust for AI are comparable to those applied by designers and strategists when engaging with stakeholders, so most of my references come from the psychology, design strategy, and business decision-making literature.

I hope that this article serves as a guide, bringing together valuable perspectives to help you create AI experiences that inspire trust. Keep in mind that your specific situation and needs will shape how you apply these principles, but they provide a solid foundation for building trustworthy AI interactions.

TL;DR

  • Create great first impressions. People are more tolerant of minor usability issues when the design of a product or service is aesthetically pleasing.
  • Provide clear and immediate value. Show users how sharing their information within the product will benefit them.
  • Try a little tenderness. Behavior change puts people in a vulnerable place. You have an opportunity to respond to that vulnerability with sensitivity and respect within the design of your product.
  • Design for failure. Always incorporate fail-safes.
  • Anthropomorphize Like It’s Going Out of Style. Giving your product human characteristics can quickly earn users’ trust, because there’s nothing people trust more than their friends.
  • Not Too Human. If technology is too human, it becomes creepy.
  • Design for Control and Transparency. Offer people opportunities to shape how they use the technology (“open the black box”); make it available to inspection; let them set their own goals.
  • Deliver on the promise. Keep users informed about what the system is doing, what stage of the operation it has reached, and whether things are proceeding smoothly, so they are not surprised by the results.
  • Strike a balance between the amount of information a system should have about its users and the level of understanding users need about how the system works to operate it confidently.
  • Have clear and consistent ethical values built into the system, ensuring your system is free of cognitive biases and real-life prejudices while minimizing “Moral Crumple Zones”.

Before Using the Product

Before a user even interacts with an AI-powered product, their perception is shaped by factors such as branding, reputation, and word-of-mouth. To establish a foundation of trust:

  • Transparency: Clearly communicate how AI is used in the product, its benefits, and limitations. Avoid using technical jargon that might alienate users.
  • Privacy Assurance: Highlight your commitment to user privacy. Explain the measures taken to safeguard user data and how data is used.
  • Educational Content: Provide resources like FAQs or video tutorials to help users understand the technology and its impact on their experience.

Why trust is important

In the realm of AI-powered products, building trust starts well before users even engage with the technology. Amy Bucher’s insights from Engaged: Designing for Behavior Change offer a valuable framework for understanding the multifaceted nature of trust and its significance in the pre-engagement phase.

When it comes to technology, trust is everything: users who don’t trust in a product or the people behind it will be reluctant users at best. Most likely, they’ll be no users.

Bucher, A., “A Matter of Trust: Design Users Can Believe In” in Engaged: Designing for Behavior Change (2020)

Trust serves as a cornerstone for fostering positive user perceptions and encouraging users to embrace AI-driven solutions with confidence.

Trust in Human Support and Capability

When users embark on their journey with an AI-powered product, it’s essential to extend trust beyond the technology itself. Bucher’s observation that trust extends to live human beings involved in supporting behavior change resonates strongly here. This underscores the importance of emphasizing the role of customer support, product experts, and other human agents who provide assistance.

If there are live human beings involved in supporting the behavior change, trust extended to them as well — that they work in good faith and are capable of promised support.

Bucher, A., “A Matter of Trust: Design Users Can Believe In” in Engaged: Designing for Behavior Change (2020)

Designers and strategists should consider the following strategies to build trust in human support:

  1. Visible and Accessible Support Channels: Clearly communicate the availability of human support channels. Provide avenues for users to ask questions, seek guidance, and share concerns, both within the product interface and on associated platforms.
  2. Empathetic Engagement: Train support teams to engage empathetically with users. Understanding user pain points and providing genuine assistance contributes to users’ confidence in both the product and the human support they receive.
  3. Consistent Communication: Ensure that user inquiries are addressed promptly and consistently. Transparency about response times and availability of support hours is crucial for fostering trust.

Confidence in Product Efficacy

Amy Bucher’s observation that trust is tied to confidence in a product’s functionality directly applies to AI-driven products. Users need to believe that the product is capable of delivering on its promises and facilitating meaningful behavior change.

Trust includes confidence that the product works. In the case of a behavior change product, users who trust a product believe it offers a legitimate protocol that is appropriate for people like them.

Bucher, A., “A Matter of Trust: Design Users Can Believe In” in Engaged: Designing for Behavior Change (2020)

To instill this confidence, designers and strategists can adopt the following practices:

  1. Clear Value Proposition: Present a clear and compelling value proposition that outlines how the product addresses users’ needs. Highlight its AI-powered features and their potential impact on users’ lives.
  2. Appropriate Protocols: As Bucher suggests, users need to trust that the product offers a legitimate and appropriate protocol for behavior change. This requires transparent communication about the science, research, and methodologies that underpin the product’s recommendations.
  3. User-Centered Design: Develop the product with a deep understanding of the target audience’s preferences, pain points, and behaviors. Incorporate user feedback and iterate the product design to align with users’ expectations.

The Science Behind It

Continuing to draw from Amy Bucher’s insights from Engaged: Designing for Behavior Change, the importance of establishing credibility by highlighting scientific foundations and experts resonates strongly in the pre-engagement phase of AI-powered products. By showcasing the expertise behind the product, designers and strategists can instill confidence and build anticipation among potential users.

Expertise and Credibility

In the world of AI-powered products, users seek assurance that the technology is not only cutting-edge but also grounded in established knowledge and expertise. Highlighting the experts and institutions associated with the product can significantly enhance its credibility.

Here’s how to leverage this insight to build trust according to Amy Bucher’s Engaged: Designing for Behavior Change:

  1. Institutional Affiliations: If the behavior change product is linked to renowned research centers, universities, or institutions, showcase these affiliations. This connection underscores the product’s credibility and aligns it with authoritative sources.
  2. Expert Profiles: Introduce users to the researchers, scientists, and professionals who have contributed to the product’s development. Share their credentials, areas of expertise, and contributions to the field. This personal connection fosters trust.
  3. Research Insights: Offer glimpses into the scientific studies, research papers, or publications that have informed the product’s design. This demonstrates a commitment to evidence-based practices and builds trust in the product’s effectiveness.

Satisfied Customers and Testimonials

User testimonials play a pivotal role in validating the effectiveness of behavior change products. They serve as social proof, assuring potential users that others have successfully benefited from the product. According to Amy Bucher’s Engaged: Designing for Behavior Change, designers and strategists can leverage testimonials to create anticipation and trust:

  1. Diverse Testimonials: Showcase a variety of testimonials from different users, highlighting various aspects of the product’s impact. This diversity provides a well-rounded view of the product’s potential benefits.
  2. Concrete Outcomes: Encourage users to share specific outcomes, achievements, or improvements they experienced through the product. Concrete examples resonate more effectively with potential users.
  3. Visual Testimonials: Incorporate visuals such as photos or videos of satisfied customers sharing their experiences. Visual content enhances authenticity and relatability.

Anticipating Positive Outcomes

By incorporating these strategies into the pre-engagement phase, designers and strategists can set the stage for building trustworthy experiences. Highlighting scientific foundations, experts, and satisfied customers builds anticipation and establishes the product’s credibility before users even begin interacting with it. This proactive approach not only fosters trust but also encourages potential users to see the value and benefits that await them, making them more likely to engage with the product with confidence and enthusiasm.

Ethical Values

Transitioning from the insights of Amy Bucher to broader ethical considerations, the trustworthiness of AI-powered products hinges on the infusion of ethical values into the machines by their creators. Drawing from thought leaders such as Marcus and Davis, Agrawal, Gans, Goldfarb, Wilson, and Daugherty, it’s evident that ethical AI requires understanding and integrating human values, eliminating biases, and acknowledging underlying responsibility.

Data Privacy and Protection as Human Rights

In most countries, the law does not force criminal suspects to self-incriminate. There is something perverse about making people complicit in their own downfall. A federal judge in California banned police from forcing suspects to swipe open their phones because it is analogous to self-incrimination. And yet we tolerate innocent netizens being forced to give up their personal data, which is then used in all sorts of ways contrary to their interests.
We should protect netizens at least as much as we protect criminal suspects.

Our personal data should not be used as a weapon against our best interests.

Véliz, C., Privacy is power: Why and how you should take back control of your data (2020)

Privacy and Data Protection, though connected, are commonly recognised all over the world as two separate rights. In Europe, they are considered vital components for a sustainable democracy (European Data Protection Supervisor, 2023):

  • In the EU, human dignity is recognised as an absolute fundamental right. In this notion of dignity, privacy or the right to a private life, to be autonomous, in control of information about yourself, to be let alone, plays a pivotal role. Privacy is not only an individual right but also a social value, recognised as a universal human right, while data protection is not – at least not yet.
  • Data protection is about protecting any information relating to an identified or identifiable natural (living) person, including names, dates of birth, photographs, video footage, email addresses and telephone numbers. Other information such as IP addresses and communications content – related to or provided by end-users of communications services – are also considered personal data.
  • Privacy and data protection are two rights enshrined in the EU Treaties and in the EU Charter of Fundamental Rights.

The General Data Protection Regulation (GDPR) lists the rights of the data subject, meaning the rights of the individuals whose personal data is being processed. These strengthened rights give individuals more control over their personal data, including through (European Council, The General Data Protection Regulation, 2022):

  • the need for an individual’s clear consent to the processing of his or her personal data
  • easier access for the data subject to his or her personal data
  • the right to rectification, to erasure and ‘to be forgotten’
  • the right to object, including to the use of personal data for the purposes of ‘profiling’
  • the right to data portability from one service provider to another

The regulation also lays down the obligation for controllers (those who are responsible for the processing of data) to provide transparent and easily accessible information to individuals on the processing of their data.
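
Translated into product terms, these rights imply concrete capabilities rather than just policy text. The following is a minimal, purely illustrative Python sketch (the class and method names are my own assumptions, not part of the GDPR or of any specific library) of a data store that can answer consent, access/portability, and erasure requests:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str      # e.g. "personalised recommendations"
    granted: bool
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class PersonalDataStore:
    """Toy store illustrating consent logging, access/portability and erasure."""

    def __init__(self):
        self._records = {}     # user_id -> personal data held about that user
        self._consents = []    # append-only log of ConsentRecord entries

    def record_consent(self, user_id, purpose, granted):
        # Clear, per-purpose consent is recorded rather than assumed
        self._consents.append(ConsentRecord(user_id, purpose, granted))

    def export(self, user_id):
        # Right of access / data portability: return everything held about the user
        return self._records.get(user_id, {})

    def erase(self, user_id):
        # Right to erasure ("right to be forgotten")
        self._records.pop(user_id, None)
```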

Infusing AI with Core Human Values

As AI technology evolves, it’s imperative to ensure that it respects and aligns with human values. Marcus and Davis’ assertion that AI should understand and respect a core set of human values underscores the importance of building AI systems that engage with users in ways that reflect ethical principles. This not only engenders trust but also facilitates harmonious human-AI interactions.

Any AI that interacts with Humans in open-ended ways should be required (by law) to understand and respect a core set of human values.

Marcus, G., & Davis, E., Rebooting AI: Building artificial intelligence we can trust (2020)

Marcus and Davis recognize that defining and implementing these core human values within AI systems is a complex task. However, they argue that it is essential for AI technologies to consider values such as fairness, transparency, accountability, privacy, and respect for human dignity. By doing so, AI systems can operate in a way that aligns with societal norms and expectations, fostering greater trust between humans and AI.

The High-Level Expert Group on AI of the European Commission presented its Ethics Guidelines for Trustworthy Artificial Intelligence, putting forward a set of seven key requirements that AI systems should meet to be deemed trustworthy:

  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches
  • Technical Robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that unintentional harm can also be minimized and prevented.
  • Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
  • Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
  • Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
  • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered. 
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.

Marcus and Davis also highlight that embedding core human values into AI systems requires interdisciplinary collaboration, involving ethicists, social scientists, technologists, and policymakers. This collaborative effort is necessary to ensure that AI technologies are developed with an understanding of the broader societal implications and ethical considerations.

Eliminating Biases and Prejudices

In a previous post, I mentioned that in part we fail to make good decisions because of glitches in our thinking, including deep-seated biases that produce troubling lapses in logic. Each of us falls prey to these glitches to some degree, no matter how logical or open-minded we believe ourselves to be (Riel, J., & Martin, R. L., Creating great choices, 2017).

These glitches in our thinking — more often than not — make you pay more attention to information that supports what you already believe or want to be true, while snubbing information that would challenge your current thinking. This type of thinking error leads you to (Griffiths, C., & Costi, M., The Creative Thinking Handbook: Your step-by-step guide to problem solving in business, 2019):

  • Go into denial and ignore blatant facts. You avoid asking tough questions and discount new information that might put your favourite ideas or theories to the test (confirmation bias).
  • Stop at the first ‘right’ answer and so miss out on a multitude of possible answers you could find if you bothered to look.
  • Get overly attached to pet ideas, even if they don’t turn out to be all that great.
  • Be made a fool of by your own expectations. You interpret the future based on what you expect to happen – and are caught off guard by what actually happens!
  • Avoid taking risks owing to fear of losing (loss aversion). Rather than being driven by what you can gain, you’re more worried about what you might lose. As a result, you sidestep exciting opportunities and rebuff innovative suggestions.
Cognitive Bias Codex, from Every Single Cognitive Bias in One Infographic (Desjardins, J., 2021)

One way to avoid such traps is — obviously — to be aware of them. Here are a few to watch for (Hammond, et al., The Hidden Traps in Decision Making, 2013):

  • The Anchoring Trap leads us to give disproportionate weight to the first information we receive
  • The Status-quo Trap biases us towards maintaining the current situation – even when better alternatives exist
  • The Sunk-Cost Trap inclines us to make choices in the way that justifies past choices, even when these were mistakes
  • The Confirming-Evidence Trap leads us to seek out information supporting an existing predilection and to discount opposing information.
  • The Framing Trap occurs when we misstate a problem, undermining the entire decision-making process
  • The Overconfidence Trap makes us overestimate the accuracy of our forecasts
  • The Prudence Trap leads us to be overcautious when we make estimates about uncertain events.
  • The Recallability Trap prompts us to give undue weight to recent, dramatic events
Strategy and Facilitating Good Decisions

Learn how designers and strategists can respond, prod, encourage, guide, coach and teach as they guide individuals and groups to make good decisions that are critical in the business (Photo by Pixabay on Pexels.com)

One way to avoid such traps is — obviously — to beware of such biases and keep asking questions.

When it comes to developing AI products, it is crucial that they are free from cognitive biases and real-life prejudices. Agrawal, Gans, and Goldfarb have pointed out that if biases seep into the decisions and recommendations made by AI, it can compromise the trustworthiness of these products. This could potentially harm users and perpetuate societal inequalities.

Any AI should be free of cognitive biases or real-life prejudices

Agrawal, A., Gans, J., & Goldfarb, A., Prediction machines: The simple economics of artificial intelligence (2018)

Agrawal, Gans, and Goldfarb highlight several key points related to eliminating biases and prejudices in AI systems:

  1. Data Bias: The authors emphasize that AI systems learn from historical data, which can contain inherent biases present in society. If the training data is biased, the AI system’s outputs may also reflect those biases, leading to unfair or discriminatory outcomes.
  2. Ethical Concerns: The presence of biases and prejudices in AI systems raises ethical concerns, as these systems can perpetuate and amplify societal inequalities. Addressing biases is crucial to ensure that AI technologies are fair and just in their decision-making.
  3. Algorithmic Fairness: The authors discuss the concept of algorithmic fairness, which involves designing AI algorithms to make decisions that are unbiased and do not discriminate against certain groups. Achieving algorithmic fairness requires careful consideration of how decisions are made and the potential impact on different demographic groups.
  4. Auditing and Transparency: Agrawal, Gans, and Goldfarb advocate for transparency and accountability in AI systems. They suggest that AI developers should regularly audit their systems to identify and rectify biases. Transparency about how AI systems arrive at decisions can help identify and address biases effectively.
  5. Diverse Development Teams: To combat biases and prejudices, the authors recommend building diverse development teams that can identify and address potential biases from different perspectives. Diverse teams are better equipped to consider a wide range of ethical and social implications during the AI development process.
  6. Regulation and Oversight: The authors suggest that regulatory frameworks should be developed to ensure that AI systems adhere to ethical standards and do not perpetuate biases. Proper oversight can help prevent negative consequences arising from biased AI applications.
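
Point 4 in the list above, auditing and transparency, can start very small. The sketch below is a minimal, illustrative Python example that computes a demographic-parity gap over a log of past model decisions; the record format, field names, and toy data are assumptions made for illustration only:

```python
from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Positive-outcome rate per group (e.g. per gender or age band)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += 1 if record[outcome_key] else 0
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(records, group_key, outcome_key):
    """Largest difference in positive-outcome rates between any two groups."""
    rates = selection_rates(records, group_key, outcome_key)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit of decisions already made by a model
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
print(demographic_parity_gap(decisions, "group", "approved"))  # 0.5 in this toy sample
```

A large gap is not proof of discrimination on its own, but it is a concrete signal that the training data and the model deserve closer scrutiny before the system is trusted with decisions.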

In another post, I mentioned that it’s been my experience that — left to chance — it’s only natural for teams to stray from vision and goals. Helping teams paddle in the same direction requires not only good vision and goals, but also leadership and intentional facilitation. So — from this perspective — AI could take the role of facilitator for individuals and teams by asking questions.

Perhaps nothing is more important to exploration and discovery than the art of asking good questions. Questions are fire-starters: they ignite people’s passions and energy; they create heat; and they illuminate things that were previously obscure.

Gray, D., Brown, S., Macanufo, J., “Core Gamestorming Skills” in Gamestorming (2010)

In order to establish and sustain trust, those who develop AI technology must make a concerted effort to eradicate biases, promote impartiality, and encourage meaningful conversations that enable users to recognize their own biases.

To encourage such meaningful conversations, the facilitator or — in this case — the AI could trigger team discussions with appropriately timed, thought-provoking questions before they make any big decision.

While nobody plans to complicate their life with bad decisions, far too many people don’t plan not to: well-placed, appropriately timed, thought-provoking questions helps us think through our decisions.

Stanley, A., Better decisions, fewer regrets: 5 Questions to help you determine your next move (2020)

Questions that decision-makers should ask themselves (Kahneman, D., Lovallo, D., & Sibony, O., “The Big Idea: Before You Make That Big Decision” in HBR’s 10 must reads on making smart decisions, 2013):

  • Check for the self-interest bias: is there any reason to suspect the team making the recommendation of errors motivated by self-interest? Review with extra care, especially for overoptimism.
  • Check for the affect heuristic: has the team fallen in love with its proposal? Rigorously apply all the quality controls on this checklist.
  • Check for groupthink: were there dissenting opinions within the team? Were they explored adequately? Solicit dissenting views, discreetly if necessary.

Questions that decision makers should ask the recommenders (Kahneman, D., Lovallo, D., & Sibony, O., “The Big Idea: Before You Make That Big Decision” in HBR’s 10 must reads on making smart decisions, 2013):

  • Check for salience bias: could the diagnosis be overly influenced by an analogy to a memorable success? Ask for more analogies, and rigorously analyse their similarity to the current situation.
  • Check for confirmation bias: are credible alternatives included along with the recommendation? Request additional options.
  • Check for availability bias: if you had to make this decision again in a year’s time, what information would you want, and can you get more of it now? Use checklists of the data needed for each kind of decision.
  • Check for anchoring bias: do you know where the numbers came from? Could there be unsubstantiated numbers? Extrapolation from history? A motivation to use a certain anchor? Reanchor with figures generated by other models or benchmarks, and request new analysis.
  • Check for the halo effect: is the team assuming that a person, organisation, or approach that is successful in one area will be just as successful in another? Eliminate false inferences, and ask the team to seek additional comparable examples.
  • Check for the sunk-cost fallacy, endowment effect: are the recommenders overly attached to a history of past decisions? Consider the issue as if you were a new CEO.
Strategy and the Art of Asking Questions

Learn more about how to ask good questions that ensure teams are making good decisions (Photo by Burak K on Pexels.com)

Responsibility and Moral Crumple Zones

In the book Human + Machine: Reimagining Work in the Age of AI by Paul R. Daugherty and H. James Wilson, the concept of “moral crumple zones” refers to situations in which the ethical implications of an action or decision are unclear or ambiguous, creating potential ethical dilemmas. The term is used metaphorically to describe areas where traditional human decision-making processes can be easily compromised when humans collaborate with AI or automated systems.

In the context of AI and automation, the authors argue that as technology plays a larger role in decision-making, there is a need to consider and address ethical concerns that may arise. The term “moral crumple zones” suggests that, like the crumple zones in a car designed to absorb energy in a collision, there are areas where ethical concerns might be absorbed or overlooked due to the complexity of human-AI interactions.

The concept of minimizing “Moral Crumple Zones” reinforces the idea that — while AI can offload certain tasks — the ultimate responsibility for their outcomes lies with the creators and managers.

While management can offload certain activities to AI, it can’t offload underlying responsibility for how they are administered.

Wilson, H. J., & Daugherty, P. R., Human + machine: Reimagining work in the age of AI (2018)

Designers and developers cannot relinquish the ethical accountability that comes with AI deployment. This perspective reinforces the importance of robust oversight and ethical decision-making throughout the product lifecycle.

Moral Crumples: Driverless cars
We must change rules and institutions while promoting innovation to protect people from faulty AI. (Maliha, G., Parikh, R. B., Who Is Liable When AI Kills? 2022) Credit: Sergeii Aremenko/Science Photo Library/ Getty Images

The authors highlight that AI can sometimes lead to unintended ethical consequences because its decision-making processes are not always transparent or aligned with human values. The term underscores the importance of designing AI systems in ways that prevent them from inadvertently making unethical decisions, and also to ensure that human oversight and accountability remain present to handle situations that might fall into these “moral crumple zones.”

Product’s First Impressions

The initial interaction sets the tone for the entire user experience. To establish trust from the start:

  • Simplicity: Design a clean and intuitive interface that doesn’t overwhelm users with AI-related complexities.
  • Personalization: Show how AI enhances personalization without compromising user privacy. Allow users to adjust their settings and preferences.
  • Clear Value Proposition: Clearly demonstrate how the AI-powered features solve real problems or enhance user experience.

The initial interactions users have with an AI-powered product significantly influence their perception and level of trust. In this crucial phase, the concept that “people use ‘Look and Feel’ as their first indicator of trust,” as highlighted by Sillence, Briggs, Harris, and Fishwick in their research, holds immense significance. Design factors, such as color, font, navigation, and overall aesthetics, play a pivotal role in overcoming the initial trust rejection and establishing a solid foundation of trust.

The Power of First Impressions

Sillence et al.’s research underscores the rapid and instinctual nature of users’ decisions when assessing a website’s or digital platform’s trustworthiness. In this “trust rejection” phase, users quickly decide whether a site or product is credible enough to explore further. In the context of AI-powered products, this concept translates to the importance of the product’s first impressions, which influence whether users will engage or abandon it.

Design Factors and Trust Rejection

The “Look and Feel” of an AI-powered product is pivotal in determining whether it passes through the initial trust rejection phase. Here’s how designers and strategists can leverage this insight to foster trust:

  1. Aesthetic Consistency: Ensure that the design elements, including color palettes, fonts, and visual hierarchy, are consistent and aligned with the product’s branding. Inconsistencies can raise suspicion and hinder trust.
  2. User-Friendly Navigation: Intuitive navigation is crucial in enabling users to explore the product seamlessly. A cluttered or confusing interface may lead to a swift rejection of trust.
  3. Professional Design: Invest in professional and polished design that reflects the product’s sophistication and reliability. Amateurish design may trigger doubts about the product’s legitimacy.

Content and Credibility Factors

Once an AI-powered product successfully navigates the initial trust rejection phase, content and credibility become decisive factors in solidifying user trust:

  1. Quality Content: Provide clear and concise content that outlines the product’s value proposition and functionality. Ambiguity or overly technical language can erode trust.
  2. Credible Information: Back up the product’s claims with accurate and credible sources. Any information provided should be transparent and easily verifiable.
  3. User Testimonials: As previously discussed, testimonials from satisfied users can contribute to the product’s credibility, reassuring new users about its effectiveness.

Navigating the Trust Rejection Phase

In the realm of AI-powered products, users’ first impressions significantly impact their trust decisions. Sillence et al.’s research highlights the importance of design factors in overcoming the initial trust rejection phase. By focusing on aesthetic consistency, user-friendly navigation, and professional design, designers and strategists can create a visually appealing and inviting environment that encourages users to explore further.

Once past this phase, content and credibility become the bedrock of trust establishment. Designers should ensure that the content is informative, transparent, and backed by credible sources. By carefully crafting the product’s appearance and substantiating its claims, designers can set the stage for meaningful engagement, encouraging users to move beyond initial skepticism and embrace the AI-powered product with increasing trust and confidence.

Beautiful is Usable

The concept of “Beautiful is Usable” holds — in my opinion — a profound influence on users’ perception of an AI-powered product during the initial interaction phase. Research by Kurosu and Kashimura (1995), Tractinsky et al. (2000), and Ashby and Isen (1999) collectively underscores the impact of aesthetic appeal on users’ perception of usability and effectiveness.

An aesthetically pleasing design creates a positive response in people’s brains and leads them to believe the design actually works better.

Kurosu, M., Kashimura, K., Apparent Usability vs. Inherent Usability: Experimental Analysis on the Determinants of the Apparent Usability (1995)

Designing an aesthetically pleasing interface not only creates a positive response but also influences users’ willingness to trust and engage with the product.

Aesthetic Pleasure and Usability

Kurosu and Kashimura’s findings accentuate that an aesthetically pleasing design induces positive responses in users’ brains, leading them to believe that the design is more effective. This suggests that the visual appeal of an AI-powered product’s interface has the potential to influence users’ perception of its usability and trustworthiness positively.

Tolerance of Usability Issues

Tractinsky et al.’s research highlights that users tend to be more tolerant of minor usability issues when the design is visually pleasing. In this context, users might overlook minor inconveniences or hiccups in the user experience if the product’s design captivates them aesthetically. This further reinforces the importance of creating a visually appealing first impression to mitigate initial trust barriers.

Potential Masking of Usability Problems

Ashby and Isen’s insights suggest that visually pleasing design can mask usability problems and potentially hinder the discovery of issues during usability testing. While aesthetics can create a positive aura, they shouldn’t overshadow the need for rigorous testing and optimization to ensure the product functions seamlessly and meets user needs.

There are many designers, many design schools, who cannot distinguish prettiness from usefulness. Off they go, training their students to make things pleasant: façade design, one of my designer friends calls it (disdainfully, let me emphasize). True beauty in a product has to be more than skin deep, more than a façade. To be truly beautiful, wondrous, and pleasurable, the product has to fulfill a useful function, work well, and be usable and understandable (Norman, D., Emotional Design: Why We Love (or Hate) Everyday Things, 2005).

Striking the Balance

Incorporating the “Beautiful is Usable” principle into the design of AI-powered products demands a balance between aesthetics and functionality.

Good design means that beauty and usability are in balance. An object that is beautiful to the core is no better than one that is only pretty if they both lack usability.

Norman, D., Emotional Design: Why We Love (or Hate) Everyday Things (2005)

Designers and strategists must:

  1. Not Overlook Usability: While aesthetics are crucial, usability and functionality should never take a back seat. Ensure that the product not only looks pleasing but also operates seamlessly.
  2. Test Rigorously: Despite aesthetic appeal, usability testing remains essential to uncover potential issues. Continuously gather user feedback and make iterative improvements to enhance trustworthiness.
  3. Visual Consistency: Maintain a consistent and coherent visual design that aligns with the product’s branding. Inconsistencies can disrupt the aesthetic experience and erode trust.

Aesthetics and Trust Formation

In the realm of AI-powered products, aesthetics play a pivotal role in forming first impressions and influencing users’ perception of trust and usability. By crafting visually pleasing designs that captivate users’ attention, designers can create a positive initial response and foster a sense of anticipation. However, the allure of aesthetics should be balanced with usability, testing, and optimization to ensure that the product not only appears trustworthy but truly functions effectively. In embracing the “Beautiful is Usable” concept, designers pave the way for building trustworthy experiences that extend beyond mere visual appeal.

The Product Experience

The core experience is where the user witnesses the AI in action. Here, building trust requires a delicate balance between automation and human touch:

  • Reliability: Ensure the AI’s accuracy and consistency to prevent frustration and misinformation.
  • User Control: Empower users with the ability to influence or override AI recommendations, fostering a sense of control.
  • Real-time Feedback: Provide explanations for AI-generated outcomes, helping users understand the reasoning behind suggestions.

Create Simple Conceptual Models

In Don Norman’s book Emotional Design: Why We Love (or Hate) Everyday Things, the concept of “Create Simple Conceptual Models” pertains to the importance of designing products and systems to allow users to easily understand how they work and how to interact with them. A conceptual model is a mental representation that users develop to understand how a product or system functions.

Norman emphasizes that creating simple conceptual models is essential for building trust and user confidence in a product’s functionality. When users can form a clear mental model of how a product works, they are more likely to feel in control and trust that the product will perform as expected.

Users’ understanding of how a product works and the feedback they receive from it is crucial in building and maintaining trust.

Norman, D., Emotional Design: Why We Love (or Hate) Everyday Things (2005)

Keep users informed about what the system is doing, what stage of the operation it has reached, and whether things are proceeding smoothly or not, so they are not surprised by the results (Norman, D., Emotional Design: Why We Love (or Hate) Everyday Things, 2005).

By creating simple conceptual models, designers can enhance user experience, foster trust, and minimize confusion or frustration. When users can form accurate mental models of how a product operates, they are more likely to engage with it confidently and effectively.

Reliable Performance and Trust

Norman’s concept of expecting an item to “perform precisely according to expectation” resonates strongly in the context of AI-powered products. Users place their trust in systems that reliably deliver expected outcomes. Designers and strategists must focus on ensuring that the product consistently meets users’ expectations, thereby reinforcing trust.

Creating simple and intuitive conceptual models helps users grasp the underlying processes and contributes to a stronger sense of trust. Key practices include:

  1. Understandable Design: Products should be designed in a way that aligns with users’ mental models and expectations. Users should be able to easily grasp the relationships between different parts of a product or system.
  2. Feedback and Transparency: Users should receive clear feedback about the product’s status and operation. This helps them understand the cause-and-effect relationships between their actions and the system’s responses.
  3. Predictability: Designers should aim to make the product’s behavior predictable and consistent. Users should be able to anticipate how the product will respond to their interactions.
  4. Clarity: Avoid unnecessary complexity or ambiguity in the design. Keep the product’s functionality and operation straightforward and easy to understand.
  5. Visibility of System State: Users should have a clear view of the product’s current state and progress. This visibility helps users understand what the product is doing at any given moment.
  6. Simplicity: Strive for simplicity in the design of the product’s interactions and functionality. Simplified designs are often more intuitive and easier for users to comprehend.

User Understanding and Informed Trust

Norman’s emphasis on user understanding and feedback aligns with the principle that informed users are more likely to trust a product. Designers should ensure that users have a clear mental model of the product’s operations and outcomes:

  1. Educational Materials: Provide resources that explain the AI algorithms and processes in simple terms. Users who understand how the product functions are more likely to trust its recommendations.
  2. Progress Indicators: Implement progress indicators, loading animations, or visual cues that convey the stages of an operation. This helps users track the system’s progress and feel engaged in the process.
  3. Error Handling: Communicate errors or unexpected outcomes transparently. When users are kept informed about issues and provided with potential solutions, trust remains intact.
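
To make the second point concrete, here is a minimal sketch of how a long-running AI operation might report its stages back to the interface. The stage names, the callback signature, and the fake task are assumptions for illustration, not a prescribed API:

```python
import time

STAGES = ["uploading data", "analyzing", "generating recommendations", "done"]

def run_with_progress(task, on_progress):
    """Run a long AI operation while reporting each stage to the UI,
    so users always know what the system is doing and how far along it is."""
    for step, stage in enumerate(STAGES, start=1):
        on_progress(stage=stage, step=step, total=len(STAGES))
        task(stage)

def fake_task(stage):
    time.sleep(0.1)  # stand-in for real work at each stage

run_with_progress(
    fake_task,
    on_progress=lambda **s: print(f"[{s['step']}/{s['total']}] {s['stage']}"),
)
```

However the reporting is wired up, the design point is the same: the user should never have to guess whether the system is still working or has silently failed.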

In the product experience phase, user understanding and clear feedback mechanisms play a vital role in building and maintaining trust in AI-powered products. By designing systems that align with users’ expectations, providing simple conceptual models, and offering consistent feedback, designers can create an environment where users feel informed, engaged, and confident in the product’s reliability. This focus on understanding and communication establishes a strong foundation for trust and encourages users to form lasting relationships with AI-powered systems.

Nurture Trust in Every Interaction

Continuing to explore trust within the product experience phase, the concept of nurturing trust in every interaction, as highlighted by Christopher Noessel in Designing Agentive Technology, unveils the delicate nature of trust-building in AI-powered products. Noessel’s perspective emphasizes that trust is cultivated over time through repeated interactions. Every successful interaction bolsters trust, while failures can swiftly erode it. Designing for trust requires a consistent commitment to reliability and user satisfaction throughout the user journey.

Incremental Trust Building

Noessel’s insights underscore the gradual nature of trust development in AI-powered products. Users form their initial trust impressions during the early interactions, and every subsequent interaction either reinforces or challenges that trust. Designers must focus on building trustworthy experiences that consistently deliver reliable results to foster a sense of trustworthiness.

In a previous post, I mentioned that the level of trust in business relationships, whether internal with employees or colleagues or external with clients and partners, is the greatest determinant of success. The challenge is having a conceptual framework and analytical way of evaluating and understanding trust. Without the proper framework for evaluating trust, there’s no actionable way to improve our trustworthiness (Maister, D. H., Galford, R., & Green, C, The trusted advisor, 2021).

Trust must be earned and deserved. You must do something to give the other people the evidence on which they can base their decision on whether to trust you. You must be willing to give in order to get.

Maister, D. H., Galford, R., & Green, C, The trusted advisor (2021)

To understand how to build trust between people, I have found the Trust Equation to be very helpful. I believe that the principles of trust-building for AI are similar to those for designers and strategists as they interact with stakeholders.

The Trust Equation is a deconstructive, analytical model of trustworthiness that can be easily understood and used to help yourself and your organization. The Trust Equation uses four objective variables to measure trustworthiness. These four variables are best described as Credibility, Reliability, Intimacy, and Self-Orientation (Maister, D. H., Galford, R., & Green, C., The trusted advisor, 2021).

The Trust Equation
The Trust Equation is now the cornerstone of our practice: a deconstructive, analytical model of trustworthiness that can be easily understood and used to help yourself and your organization.
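
For readers who cannot see the figure, the equation is usually written as:

```latex
\text{Trustworthiness} = \frac{\text{Credibility} + \text{Reliability} + \text{Intimacy}}{\text{Self-Orientation}}
```

The structure carries the point: the first three variables add to trustworthiness, while self-orientation divides it, so a strong focus on yourself can undo everything the numerator has earned.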

Let’s dig into each variable a bit more (Maister, D. H., Galford, R., & Green, C, The trusted advisor, 2021):

  • Credibility has to do with the words we speak. In a sentence, we might say, “I can trust what she says about intellectual property; she’s very credible on the subject.”
  • Reliability has to do with actions. We might say, “If he says he’ll deliver the product tomorrow, I trust him, because he’s dependable.”
  • Intimacy refers to the safety or security that we feel when entrusting someone with something. We might say, “I can trust her with that information; she’s never violated my confidentiality before, and she would never embarrass me.”
  • Self-orientation refers to the person’s focus. In particular, whether the person’s focus is primarily on him or herself, or on the other person. We might say, “I can’t trust him on this deal — I don’t think he cares enough about me, he’s focused on what he gets out of it.” Or more commonly, “I don’t trust him — I think he’s too concerned about how he’s appearing, so he’s not really paying attention.”

There are two important things about building trust. First, it has to do with keeping one’s self-interest in check. Second, trust can be won or lost very rapidly (Maister, D. H., Galford, R., & Green, C, The trusted advisor, 2021), which — again — seems to be at play during the interaction between humans and AI, especially when we talk about the unfair rollercoaster.

Strategy and Stakeholder Management

Learn about Strategy and Stakeholder Management (Photo by Rebrand Cities on Pexels.com)

Trust as an Unfair Rollercoaster

The analogy of an “unfair rollercoaster” vividly captures the delicate balance of trust in AI interactions. Trust is painstakingly accumulated over numerous successful interactions, only to be rapidly eroded by even a few failures.

Trust is built slowly over many interactions, and it can all fall quickly with a few failures

Noessel, C., Designing Agentive Technology (2017)

This emphasizes the importance of minimizing user frustrations and disappointments to maintain a steady trajectory of trust.

Strategies to Nurture Trust

To implement the concept of nurturing trust in every interaction, designers and strategists can adopt the following strategies:

  1. Reliable Performance: Prioritize reliability in the product’s AI-driven functionalities. Consistently delivering accurate and beneficial outcomes reinforces users’ trust.
  2. Transparent Communication: Keep users informed about ongoing processes, changes in settings, and any potential algorithm adjustments. Transparency prevents surprises and confusion.
  3. Efficient Error Recovery: When failures occur, design efficient error recovery mechanisms that guide users back on track swiftly. Minimize the impact of failures on user trust.
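
As an illustration of the third point, graceful degradation can be as simple as falling back to a non-AI source and telling the user why. The sketch below is hypothetical; the function names and the toy stand-ins for a model call and a help-centre search are assumptions:

```python
def answer_with_fallback(question, ask_model, search_help_articles):
    """Efficient error recovery: if the AI fails or returns nothing useful,
    fall back to a simpler source and explain to the user what happened."""
    try:
        answer = ask_model(question)
        if not answer:
            raise ValueError("empty model response")
        return {"answer": answer, "note": None}
    except Exception as exc:
        return {
            "answer": search_help_articles(question),
            "note": f"The assistant could not answer this time ({exc}); showing a help article instead.",
        }

# Toy stand-ins for a real model call and a real help-centre search
flaky_model = lambda question: ""  # simulates a failed generation
help_search = lambda question: "How to reset your password (help article)"

print(answer_with_fallback("How do I reset my password?", flaky_model, help_search))
```

The user still gets an answer, and just as importantly, an honest note about why it came from somewhere else.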

Consistency and Long-Term Perspective

The principle of nurturing trust in every interaction requires a long-term perspective on user experience. Consistency in delivering positive outcomes, along with seamless error recovery, fosters a sense of reliability that users can count on.

The Fragile Nature of Trust

In the realm of AI-powered products, trust is a fragile yet invaluable asset that’s nurtured through consistent, reliable, and satisfying interactions. Designers must embrace the understanding that trust accumulates over time but can be shattered by a few failures. By prioritizing reliability, transparency, and effective error recovery, designers build an environment where users can place their trust confidently in AI systems. This dedication to maintaining trust ensures a positive and enduring user experience that extends beyond the immediate interactions, fostering lasting relationships between users and AI-powered products.

The Law of Simplicity

Expanding further on trust-building within the product experience phase, the simplicity principles — as advocated by Maeda in The Laws of Simplicity — offer insightful perspectives on the relationship between simplicity, user trust, and the evolving role of AI in users’ lives.

How comfortable are users about the computer knowing how they think, and how tolerant they will be if (and when) the computer makes a mistake guessing their desires?

Maeda, J., The laws of simplicity (2020)

Maeda’s questions delve into users’ comfort levels, their ability to revert, and the extent to which they are willing to trust AI systems. The principles of simplicity resonate strongly in maintaining trust and enhancing the user experience.

The Dynamics of Simplicity and Trust

Maeda’s inquiries touch on several vital aspects of user trust and simplicity in AI-powered products:

  1. Computer Understanding: The extent to which users are comfortable with the computer’s understanding of their preferences and intentions reflects the evolving trust relationship between users and AI systems.
  2. Mistakes and Trust: The tolerance users have for AI mistakes highlights the fragility of trust. Mistakes, even when minor, can influence users’ trust perceptions.
  3. User Reliance: Users’ willingness to relax and entrust their experiences to AI systems reflects the depth of trust established. The more users trust the system, the more they’ll lean back and allow it to guide their experiences.

Simplicity and Trust-Enhancing Principles

To apply the “Law of Simplicity” and bolster user trust, designers and strategists can consider the following principles:

  1. Transparency in AI Understanding: Clearly communicate how the AI system understands users’ preferences and intentions. When users comprehend the process, trust in the system’s decision-making grows.
  2. Quick Reversion and Control: Design the product to allow users to quickly revert or adjust any AI-driven decisions. This sense of control cultivates a feeling of empowerment and trust.
  3. Gradual User Familiarity: Introduce AI functionalities gradually, allowing users to become familiar with and trust the system’s capabilities over time.
  4. User-Centric Data Usage: Prioritize users’ privacy and data security. Communicate how user data is utilized to build trust and assure users that their interests are respected.
  5. Balancing Knowledge: Strive for a balance between the system’s understanding of users and users’ understanding of the system. Avoid overwhelming users with technical details.
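
One way to make “quick reversion and control” tangible is to log every AI-driven change together with the explanation shown to the user, so any change can be inspected and undone. A minimal sketch, with illustrative names only:

```python
class AiPersonalization:
    """User-control sketch: every AI-driven change is logged with its
    explanation, and the user can revert to the previous value at any time."""

    def __init__(self):
        self.settings = {"feed_ranking": "chronological"}
        self.history = []   # (setting, previous value, explanation shown to the user)

    def apply_ai_suggestion(self, key, value, explanation):
        self.history.append((key, self.settings.get(key), explanation))
        self.settings[key] = value

    def revert_last(self):
        if self.history:
            key, previous, _ = self.history.pop()
            self.settings[key] = previous

prefs = AiPersonalization()
prefs.apply_ai_suggestion("feed_ranking", "personalised", "Based on your reading history")
prefs.revert_last()          # the user disagrees; one tap restores the old behaviour
print(prefs.settings)        # {'feed_ranking': 'chronological'}
```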

Simplicity as a Trust Catalyst

In the evolving landscape of AI-powered products, the “Law of Simplicity” takes on profound significance in fostering trust and enhancing user experiences. By adhering to principles that prioritize transparency, user control, and a gradual introduction of AI capabilities, designers cultivate an environment where users can comfortably entrust their experiences to AI systems. Simplicity not only streamlines interactions but also creates a sense of empowerment and understanding, ultimately building a foundation of trust that solidifies the relationship between users and AI-powered products.

Algorithm Aversion

Further expanding on trust-building during the product experience phase, the notion of being “more forgiving with humans” delves into the psychological phenomenon of “Algorithm Aversion.”

People erroneously avoid algorithms after seeing them err.

Wilson, H. J., & Daugherty, P. R., “Algorithm Aversion” in Human + machine: Reimagining work in the age of AI (2018)

This aversion reflects people’s tendencies to lose confidence in algorithms more quickly than in human forecasters, even when both make the same mistake. This effect can be attributed to the innate human desire to trust fellow humans over machines, making it vital for AI-powered products to address this psychological bias.

The phenomenon of Algorithm Aversion, highlighted in Wilson and Dougherty’s work, illustrates the complexity of trust dynamics between humans and machines:

  1. Loss of Confidence: People’s rapid loss of confidence in algorithms, compared to human forecasters, stems from the inherent inclination to trust human judgment more than machine-generated outcomes.
  2. Desire for Human Reliability: The preference for human judgment over algorithms reflects a deep-seated desire for relatability and accountability.

To address the Algorithm Aversion phenomenon and build trust, designers and strategists can consider the following approaches:

  1. Guardrails for Unintended Outcomes: Empower managers or leadership with control over unintended outcomes that might arise from AI-generated recommendations. This provides a human touchpoint for oversight and control.
  2. Human Checkpoints: Incorporate visual outputs or insights into the AI’s inner workings that support explanations of its decisions. This helps users understand the rationale behind AI-generated recommendations and builds trust through transparency.
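
Both points can be expressed as a simple routing rule: recommendations the model is unsure about go to a person instead of being applied automatically, and every recommendation carries a plain-language rationale. The sketch below is purely illustrative; the confidence threshold and field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float    # the model's own confidence estimate, 0..1
    rationale: str       # plain-language explanation surfaced alongside the action

def route(rec: Recommendation, confidence_floor: float = 0.8) -> dict:
    """Guardrail: low-confidence recommendations are queued for human review
    instead of being applied automatically."""
    if rec.confidence < confidence_floor:
        return {"status": "needs_human_review", "action": rec.action, "why": rec.rationale}
    return {"status": "auto_applied", "action": rec.action, "why": rec.rationale}

print(route(Recommendation("Increase ad budget by 10%", 0.62, "Seasonal uplift detected")))
# routed to a human because confidence is below the 0.8 floor
```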

Navigating Algorithm Aversion

In the context of AI-powered products, addressing Algorithm Aversion is essential for building trust. By acknowledging the preference for human judgment and accountability, designers can integrate features that provide human oversight, explanations, and control over AI-generated outcomes. Emphasizing the collaborative nature of AI-human interactions and ensuring transparency in decision-making processes helps establish a balanced, trustworthy environment. In navigating Algorithm Aversion, designers mitigate inherent biases and create an ecosystem where users can comfortably embrace AI-powered recommendations with confidence and trust.

Balancing Human and Machine Contributions

Incorporating human oversight and explaining the inner workings of AI strikes a balance between human and machine contributions, catering to users’ innate preferences for human involvement:

  1. Human-Machine Collaboration: Emphasize that the AI augments human decision-making rather than replacing it entirely. This collaboration leverages the strengths of both humans and machines.
  2. Transparency in AI Decisions: Clearly communicate how AI-generated recommendations are arrived at. This transparency reduces ambiguity and fosters a sense of trust in the AI’s processes.

In the evolving landscape of AI-driven interactions, achieving a harmonious balance between human and machine contributions has emerged as a pivotal endeavor. The concept of the “missing middle,” as articulated in Daugherty and Wilson’s book “Human + Machine,” encapsulates the profound potential that lies within enabling human-AI collaboration.

Daugherty and Wilson's "Missing Middle"
The range of hybrid activities called “The missing middle” (Wilson, H. J., & Daugherty, P. R., Human + machine: Reimagining work in the age of AI, 2018)

In the missing middle, humans work with smart machines to exploit what each party does best. Humans, for example, are needed to develop, train, and manage various AI applications. In doing so, they are enabling those systems to function as true collaborative partners. For their part, machines in the missing middle are helping people to punch above their weight, providing them with superhuman capabilities, such as the ability to process and analyze copious amounts of data from myriad sources in real time. Machines are augmenting human capabilities (Wilson, H. J., & Daugherty, P. R., Human + machine: Reimagining work in the age of AI, 2018)

This partnership not only reshapes business processes but also redefines the boundaries of performance and capability.

Enabling Human and AI Collaboration: Exploiting What Each Does Best

The “missing middle” signifies a transformational phase where humans and smart machines collaboratively exploit their unique strengths. While humans bring creativity, intuition, and contextual understanding to the table, AI systems provide the ability to process vast amounts of data with lightning speed and precision. By facilitating collaboration between the two, organizations tap into a dynamic synergy where each enhances the capabilities of the other.

Symbiotic Partners, Not Adversaries

In the realm of the “missing middle,” humans and machines cease to be adversaries competing for relevance. Instead, they forge symbiotic partnerships that propel both to higher echelons of performance. The role of humans shifts to developing, training, and managing various AI applications, effectively elevating these systems to function as genuine collaborative partners.

Augmenting Human Capabilities

One of the defining features of the “missing middle” is the augmentation of human capabilities by AI. These smart machines empower individuals with superhuman abilities, such as processing and analyzing vast streams of real-time data from diverse sources. This augmentation enables humans to make informed decisions at an unprecedented scale and speed, reshaping how industries operate and innovate.

Transforming Business Processes

Embracing the “missing middle” facilitates the reimagining of business processes. This symbiotic relationship encourages collaborative teams of humans and machines to work in tandem, revolutionizing the way tasks are performed and insights are gleaned. Beyond digital companies, even industries like mining are harnessing AI to manage complex machinery remotely, freeing human operators from hazardous conditions and unlocking valuable insights from sensor data.

A New Horizon of Performance and Innovation

As AI and humans move from being separate entities to collaborative partners, the “missing middle” holds the promise of a new horizon of performance and innovation. This phase reshapes industries, empowers individuals, and ushers in a future where humans and machines together pioneer uncharted territories of possibility. By embracing this concept, organizations not only leverage AI’s capabilities but also invigorate human potential, ultimately leading to a more dynamic and resilient future.


Ensuring Control and Minimizing Disruptions

Continuing to delve into trust-building within the product experience phase, the concept of Ensuring Control and Minimizing Disruptions aligns with the fundamental psychological principle of autonomy, as described by Mihaly Csikszentmihalyi in Flow: The Psychology of Optimal Experience.

Flow refers to a state of optimal engagement and immersion in an activity, where an individual is fully absorbed, focused, and in a state of heightened concentration.

Csikszentmihalyi, M., Flow: The psychology of optimal experience (2008)

During a flow state, individuals often experience a deep sense of enjoyment, fulfillment, and a loss of self-consciousness as they become fully immersed in the task at hand.

Having autonomy and control over one’s job is key to accessing the state of flow. That doesn’t mean you can only access flow if you are in control of every variable. Rather, people must be free to choose how to perform a particular task or achieve a goal and be trusted that they would come up with the best approach that any given situation requires.

Csikszentmihalyi, M., Flow: The psychology of optimal experience (2008)

Csikszentmihalyi’s research on flow highlights its positive impact on well-being and creativity. He suggests that individuals who experience flow regularly tend to report higher levels of happiness and life satisfaction. Flow can be experienced in various domains, including work, sports, arts, and hobbies. When creating AI-powered products, our goal should be to assist users in staying focused and in control, while also fostering trust and engagement. Providing autonomy to the user is key in achieving this.

Autonomy and Flow

In the realm of AI-powered products, trust is bolstered when users are granted autonomy and control over their interactions. By adhering to the principles of autonomy, flexibility, and transparency, designers enable users to navigate the product experience confidently, while also allowing them to maintain their individuality and expertise. In embracing this concept, designers create an environment where users can harness the benefits of AI-powered assistance while retaining their sense of control, ultimately fostering a strong foundation of trust and engagement.

Csikszentmihalyi’s insights into flow emphasize the significance of autonomy and control in Building Trustworthy Experiences:

  1. Choice and Autonomy: Providing users with the freedom to choose how to perform tasks or achieve goals contributes to their sense of control. This autonomy fosters engagement and trust.
  2. Trust in Individual Approach: Users must trust that they possess the capabilities to determine the best approach for a given task. This trust empowers them to make decisions confidently.

Designing for Autonomy and Trust

To embody the principle of ensuring control and minimizing disruptions, designers and strategists can implement the following strategies (a minimal sketch follows the list):

  1. Flexible Interaction Paths: Design AI-powered products with multiple interaction paths, allowing users to choose the one that aligns best with their preferences and needs.
  2. Adaptive Customization: Offer customization options that enable users to tailor the product’s behavior to suit their requirements. This personalization enhances the feeling of control.
  3. Trust in User Expertise: Build user trust by conveying confidence in their ability to make informed decisions. Avoid overbearing recommendations that limit user agency.
  4. Transparency in Decision-Making: Clearly communicate how the AI system arrives at its recommendations. This transparency empowers users to assess the information and make confident decisions.
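
As one concrete illustration of flexible interaction paths and adaptive customization, here is a minimal Python sketch of a user-controlled automation level. The AutomationLevel values and the handle_draft function are hypothetical, not drawn from any specific product; the design choice they show is that the user, not the system, decides how much the AI intervenes.

from enum import Enum

class AutomationLevel(Enum):
    OFF = "off"          # the AI stays silent; the user works unaided
    SUGGEST = "suggest"  # the AI proposes, the user decides
    AUTO = "auto"        # the AI applies changes and reports back

def handle_draft(draft: str, suggestion: str, level: AutomationLevel) -> str:
    """Respect the user's chosen level of control over the AI's involvement."""
    if level is AutomationLevel.OFF:
        return draft
    if level is AutomationLevel.SUGGEST:
        print(f"Suggestion available: {suggestion!r} (accept or ignore)")
        return draft
    print(f"Applied automatically: {suggestion!r} (you can undo this)")
    return suggestion

handle_draft("Hi team,", "Hi team, quick update on the Q3 roadmap:", AutomationLevel.SUGGEST)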

Making People Feel Comfortable

How can companies help users become more comfortable working with AI systems? This question becomes paramount as organizations strive to integrate AI seamlessly into their workflows while nurturing trust and familiarity among users.

Being able to visualize the way an AI-enabled system arrives at its decisions helps develop trust in the system: opening the black box so people can see inside.

Power, B., How to Get Employees to Stop Worrying and Love AI (2018)

The following strategies draw from Power’s insights and the IEEE Standards Association’s recommendations to facilitate employees’ comfort and trust in working with AI systems.

Visualizing AI Decision-Making

Power’s suggestion of visualizing the decision-making process of AI systems holds immense value in building trust and comfort (a minimal sketch follows the list):

  • Transparent Processes: Enable employees to visualize the steps AI-enabled systems take to arrive at decisions. Opening the “black box” provides insight into the rationale behind AI-generated recommendations, alleviating uncertainty and fostering trust (Power, B., 2018)
  • Balancing Transparency with Contextual Opacity: Understand that not all AI systems require complete transparency during operation, especially in contexts like emotional therapy. However, systems’ workings should remain available for inspection by responsible parties (IEEE Standards Association, 2019)
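
To make the idea tangible, here is a minimal sketch of a score returned together with per-feature contributions that a UI could chart for the user. It assumes a simple additive model with invented feature names and weights, standing in for whatever explanation technique a real system would use.

# A minimal sketch of "opening the black box": alongside its score, the system reports
# how much each input contributed, so the UI can visualize the rationale behind it.
# The feature names and weights are purely illustrative, not from any real model.
WEIGHTS = {"on_time_payments": 0.5, "account_age_years": 0.3, "recent_disputes": -0.6}

def score_with_explanation(features: dict) -> dict:
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return {
        "score": round(sum(contributions.values()), 2),
        "top_factors": sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True),
    }

print(score_with_explanation({"on_time_payments": 0.9, "account_age_years": 4, "recent_disputes": 1}))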

Strategies to Foster Comfort

Combining the insights from Power and the IEEE Standards Association, companies can adopt the following strategies:

  1. Educational Resources: Offer training and educational materials that help employees understand how AI systems function, make decisions, and contribute to their work processes.
  2. Interactive Visualization Tools: Develop visualization tools that enable employees to see how AI-driven recommendations are formulated. This transparency enhances trust by demystifying the decision-making process.
  3. Tailored Transparency: Customize transparency levels based on the nature of AI applications. For more sensitive contexts, emphasize the availability of the system’s workings for inspection while ensuring employees’ comfort with its operation.
  4. Clear Communication: Communicate the organization’s commitment to responsible AI deployment, emphasizing the ethical considerations and oversight in place to ensure AI systems are aligned with human well-being.

In the quest to help employees become more comfortable working with AI systems, organizations play a pivotal role in fostering transparency, familiarity, and trust. By offering visualizations of AI decision-making, tailoring transparency to context, and providing educational resources, companies empower employees to embrace AI systems confidently. Balancing transparency with the need for contextual opacity ensures responsible deployment while nurturing trust among employees. In prioritizing employee well-being and trust, organizations lay the foundation for a harmonious integration of AI into the workforce, where AI’s capabilities are harnessed to augment human potential and enhance productivity.

Machines with Common Sense

The concept of “machines with common sense” from the book Rebooting AI: Building artificial intelligence we can trust by Gary Marcus and Ernest Davis refers to the idea of creating AI systems that possess a level of understanding and reasoning akin to human common sense. The authors emphasize that for AI to be truly trustworthy and reliable, it needs to go beyond specialized tasks and statistical learning, and instead, develop a broader and deeper understanding of the world, similar to how humans possess innate common sense knowledge.

Gary Marcus and Ernest Davis argue that AI systems often lack the ability to reason about the world in ways that humans find intuitive. These systems might excel at narrow tasks, such as playing complex games or recognizing objects in images. Still, they struggle when faced with scenarios that require understanding the context, making inferences, and generalizing knowledge to new situations.

The authors contend that machines with common sense could handle unexpected or novel situations more effectively, as they would possess a foundational understanding of how the world works. This understanding would enable AI systems to fill in gaps in information, make reasonable predictions, and avoid errors that arise from relying solely on statistical patterns. In essence, machines with common sense would be better equipped to navigate the intricacies of real-world scenarios and make decisions that align with human expectations.

Ingredients for Machines with Common Sense

In their book Rebooting AI: Building artificial intelligence we can trust, Gary Marcus and Ernest Davis outline several ingredients they believe are necessary for building machines with common sense. These ingredients are crucial for enabling AI systems to better understand the world and exhibit reasoning abilities that align more closely with human common sense. Here are some key ingredients they discuss:

  1. Structured Knowledge: Marcus and Davis emphasize the importance of equipping AI systems with structured knowledge about the world. This knowledge goes beyond statistical patterns and includes facts, relationships, and contextual information that enable machines to reason about various domains. Incorporating structured knowledge allows AI systems to make inferences and fill in gaps of information more effectively.
  2. Causal Reasoning: AI systems should be capable of understanding cause-and-effect relationships. This involves being able to reason about how different events or factors are interconnected and how changes in one aspect can lead to certain outcomes. Causal reasoning enables machines to make more accurate predictions and handle complex scenarios.
  3. Contextual Intelligence: AI systems that comprehend context can generate results that align with users’ expectations. This contextual understanding bridges the gap between mere statistical analysis and human-like decision-making.
  4. Domain Independence: Machines with common sense should be able to apply their understanding across different domains and contexts. This means that the knowledge and reasoning abilities acquired in one area can be generalized and adapted to other situations, allowing AI systems to handle diverse scenarios.
  5. Incorporating Background Knowledge: The authors stress the importance of leveraging background knowledge that humans possess. This includes understanding everyday concepts, physical laws, and common-sense assumptions that underlie human reasoning. AI systems should be equipped with a similar level of foundational knowledge to make their decision-making more intuitive.
  6. Learning from Few Examples: Common sense machines should be able to learn from just a few examples rather than relying solely on vast amounts of data. This ability to generalize from limited data is crucial for handling novel situations and adapting to changing circumstances.
  7. Counterfactual Reasoning: AI systems should be capable of reasoning about counterfactual scenarios — imagining what might have happened if certain conditions were different. This type of reasoning allows machines to explore alternative outcomes and understand the consequences of different choices.
  8. Ethical and Moral Considerations: Machines with common sense should be designed to incorporate ethical and moral considerations. This involves being able to reason about ethical dilemmas, make decisions that align with human values, and avoid biased or harmful behaviors.
  9. Interaction with Humans: AI systems with common sense should be designed for effective interaction with humans. This includes understanding and responding to natural language, interpreting user intent accurately, and engaging in meaningful conversations that reflect a deeper understanding of context.

Machines with Common Sense and Trust

To address algorithm aversion concerns, it is important to consider specific design factors when creating AI-powered products. This helps users trust and feel more comfortable with these machines (Marcus, G., & Davis, E., Rebooting AI: Building artificial intelligence we can trust, 2020); a minimal sketch follows the list:

  • Design for Failure: Acknowledge the limitations of anticipating every possible failure scenario. Incorporate backup systems that can take over when things go wrong unexpectedly.
  • Fail-Safes: Implement fail-safe mechanisms that activate in extreme scenarios, preventing catastrophic outcomes. These mechanisms act as a last line of defense against complete disasters.
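
The sketch below shows one shape a fail-safe can take in code: a wrapper that catches outright failures and implausible outputs and falls back to a conservative default. The function names and the ETA example are hypothetical illustrations, not the authors' implementation.

def with_failsafe(predict, fallback, is_sane=lambda result: result is not None):
    """Wrap an AI call so that errors or implausible outputs fall back to a safe default."""
    def guarded(*args, **kwargs):
        try:
            result = predict(*args, **kwargs)
        except Exception as error:            # the model service failed outright
            print(f"Fail-safe engaged: {error}")
            return fallback(*args, **kwargs)
        if not is_sane(result):               # the model answered, but implausibly
            print("Fail-safe engaged: sanity check failed")
            return fallback(*args, **kwargs)
        return result
    return guarded

def flaky_eta_model(distance_km: float) -> float:
    raise TimeoutError("model service unavailable")   # simulating an unexpected outage

def conservative_eta(distance_km: float) -> float:
    return distance_km / 30 * 60                       # assume 30 km/h; answer in minutes

estimate_eta = with_failsafe(flaky_eta_model, conservative_eta)
print(estimate_eta(12))   # the fail-safe note prints, then 24.0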

As AI systems evolve toward possessing common sense and contextual understanding, the potential for building trust in AI-powered products reaches new heights. The interplay between statistical analysis and contextual intelligence ensures that AI systems align with human expectations and produce reliable outcomes. By integrating fail-safes and designing for failure, organizations foster an environment where machines with common sense can operate confidently while providing users with dependable and sensible results. In nurturing these qualities, organizations forge a path toward AI-powered products that seamlessly blend statistical precision with human-like comprehension, further enhancing the trustworthiness and effectiveness of AI systems.

"Designing for AI" with Emily Sappington
Watch “Creating Minimum Viable Intelligence” talk with Emily Sappington

In this talk at IxDA‘s Interaction’19, Emily Sappington explains the concept of Minimum Viable Intelligence of an AI product, and how designers can deliver a clear UX when solving problems efficiently.

Anthropomorphism

Anthropomorphism is the natural tendency of humans to infuse human-like characteristics into products, allowing users to relate, engage, and ultimately trust the technology.

In her book Engaged: Designing for Behavior Change, Amy Bucher discusses the role of anthropomorphism in building trustworthy experiences. By imbuing behavior change products with human-like qualities, users are more likely to relate to and trust the product. This can lead to increased engagement and willingness to follow the product’s recommendations for behavior change. Bucher also highlights the importance of using anthropomorphism ethically and transparently, ensuring that users understand the extent to which the product’s interactions are human-like.

While anthropomorphism holds immense potential to foster trust, it’s crucial to strike a balance between creating relatability and avoiding the eerie uncanny valley effect.

The Cautionary Note: Not Too Human

While anthropomorphism can enhance trust, Noessel’s cautionary note from Designing Agentive Technology emphasizes the importance of avoiding the “uncanny valley” effect. This effect occurs when a machine’s human-like attributes are not convincingly lifelike, leading to feelings of discomfort or unease in users.

The Uncanny Valley chart
The “Uncanny Valley” is a theory that explores the relationship between an object’s human-like appearance and a viewer’s emotional response. The concept was first introduced by Japanese roboticist Masahiro Mori in a 1970 essay. According to the theory, as an object’s human likeness increases, so does a viewer’s affinity for it. However, there is a threshold beyond which a viewer’s affinity for the object decreases. (Kendall, E., “The Uncanny Valley” in Encyclopaedia Britannica, 2022)

Harnessing Anthropomorphism for Trust

Both Noessel and Bucher emphasize that while anthropomorphism can enhance user trust and engagement, it should be employed thoughtfully:

  • Human-Like Characteristics: Infusing AI-powered products with human-like traits and behaviors resonates with users’ innate trust in human interactions.
  • Striking the Right Balance: Imbuing the system with human-like attributes while also clearly communicating the system’s capabilities is crucial to building trustworthy experiences.
  • Familiarity and Relatability: Anthropomorphism fosters a sense of familiarity, making users more likely to embrace the technology as a trustworthy companion.
  • Avoiding the Uncanny Valley: The uncanny valley effect arises when technology appears overly human-like but falls short of perfection, creating discomfort and mistrust.

Strategies for Effective Anthropomorphism

To successfully implement anthropomorphism and nurture trust, designers must consider the psychological implications of anthropomorphism and ensure that users do not develop unrealistic expectations or discomfort due to the presence of human-like qualities in AI interactions:

  1. Friendly Personification: Give AI systems friendly and approachable characteristics that align with users’ expectations of human interaction.
  2. Emotional Understanding: Design products to understand and respond to users’ emotional cues, fostering a sense of empathy and relatability.
  3. Visual and Verbal Cues: Utilize visual cues like avatars or verbal cues that mimic conversational patterns to create a more personable interaction.
  4. Contextual Appropriateness: Ensure that the level of anthropomorphism aligns with the product’s intended context. Technology that appears too human-like might trigger discomfort.

Anthropomorphism as a Trust Bridge

In the landscape of AI-powered products, the principle of anthropomorphism offers a unique avenue to cultivate trust by leveraging users’ innate trust in human-like interactions. By giving products relatable characteristics, designers create a sense of familiarity that bridges the gap between users and technology. However, the cautionary note to avoid the uncanny valley effect underscores the need to strike the right balance between creating trustworthy companions and veering into unsettling territory. By embracing anthropomorphism thoughtfully, designers harness the power of familiarity and human-like interactions to create products that users can trust, engage with, and rely on with confidence.

Off-boarding Experience

When users decide to leave or stop using the product, their final interactions can leave a lasting impression:

  • Data Portability: Allow users to easily export their data or content from the platform.
  • Farewell Interaction: Provide a personalized send-off message, expressing gratitude for their usage and offering options for future engagement.
  • Data Deletion: Assure users that their data will be deleted securely and promptly upon request.

In the realm of building trustworthy experiences, the significance of the off-boarding phase can’t be overstated. This phase encompasses the departure of users from your product and, more crucially, their ongoing perceptions of the trustworthiness of your brand. Let’s delve into the intricacies of creating an off-boarding experience that respects user privacy, aligns with data portability norms, and upholds the highest standards of transparency.

Data Portability: Empowering Users

A cornerstone of a trustworthy off-boarding experience is data portability. As users bid adieu to your product, they should retain the right to carry their data elsewhere. By offering simple and efficient methods for users to export their data in formats that are universally accessible, you demonstrate respect for their ownership and control over their information.
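
A minimal sketch of what such an export could look like, assuming a hypothetical export_user_data helper and plain JSON as the portable format; a real implementation would cover every data category the product stores.

import json
from datetime import datetime, timezone

def export_user_data(user_id: str, records: dict) -> str:
    """Bundle everything the user created into a portable, self-describing JSON file."""
    payload = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "format": "JSON, UTF-8",   # documented so other tools can import it
        "data": records,           # the user's own content, in plain structures
    }
    path = f"export_{user_id}.json"
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f, ensure_ascii=False, indent=2)
    return path

print(export_user_data("u123", {"notes": ["first note"], "preferences": {"theme": "dark"}}))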

Farewell Interaction: A Thoughtful Goodbye

The last interaction with your product leaves a lasting impression. Consider crafting a farewell interaction that expresses gratitude for users’ trust while offering clear options for data handling. By guiding users through the off-boarding process, you enhance their perception of your brand’s commitment to their privacy and overall experience.

Data Deletion: Upholding Privacy Obligations

Perhaps the most critical aspect of the off-boarding experience is data deletion. In an era defined by stringent privacy regulations like GDPR, it’s imperative to adhere to policies that dictate the erasure of user data upon request. Establish a clear and straightforward process for users to initiate data deletion, reflecting your commitment to their rights and privacy.
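
Sketched below is one way such a process might look, with an in-memory store standing in for real infrastructure. The names and messages are illustrative assumptions; an actual implementation would also purge backups and downstream copies within the legally mandated window.

from datetime import datetime, timezone

USER_STORE = {"u123": {"notes": ["first note"]}}   # stand-in for a real database
DELETION_LOG = []                                   # minimal audit trail of erasure requests

def handle_deletion_request(user_id: str) -> str:
    """Erase the user's data promptly and record that the request was honored."""
    if user_id not in USER_STORE:
        return "Nothing is stored for this account."
    del USER_STORE[user_id]
    DELETION_LOG.append({"user": user_id, "deleted_at": datetime.now(timezone.utc).isoformat()})
    return "Your data has been deleted. A confirmation is on its way to you."

print(handle_deletion_request("u123"))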

GDPR and Beyond: Legal and Ethical Compliance

Navigating the labyrinth of data privacy regulations is essential in building trust. GDPR stands as a prime example of safeguarding user data rights. When crafting the off-boarding experience, ensure that your processes align seamlessly with GDPR’s requirements. Strive to exceed legal obligations and adopt ethical data handling practices that reassure users of their information’s security.

A Commitment to Trust and Respect

As users part ways with your AI product, the off-boarding experience encapsulates your commitment to trust, respect, and ethical responsibility. By embracing data portability and data deletion, you reinforce the principles of privacy and data protection as integral to human rights. Upholding these values not only aligns with legal obligations but also resonates with users on a deeper level, leaving a lasting impression of your brand’s dedication to their rights, dignity, and overall well-being.

Recommendations for Creating Trustworthy Experiences

Bringing it all together, here are key recommendations for designers and strategists to build trustworthy experiences:

  • Create great first impressions: People are more tolerant of minor usability issues when the design of a product or service is aesthetically pleasing.
  • Provide clear and immediate value: Demonstrate back to the user how sharing their information within the product will benefit them.
  • Try a little tenderness: Behavior change puts people in a vulnerable place. You have an opportunity to react to that vulnerability with sensitivity and respect within the design of your product.
  • Design for failure: Always incorporate fail-safes.
  • Anthropomorphize Like It’s Going Out of Style: Giving your product human characteristics can quickly earn users’ trust, because there’s nothing people trust more than their friends.
  • Not Too Human: If technology is too human, it becomes creepy.
  • Control and Transparency: Offer people opportunities to shape how they use the technology (“open the black box”); make it available for inspection; let them set their own goals.
  • Deliver on the promise: Keep users informed about what the system is doing, what stage of its operations it has reached, and whether things are proceeding smoothly, so they are not surprised by the results.
  • Strike a balance between the amount of information a system should have about its users and the level of understanding users need about how the system works to confidently operate it.
  • Have clear and consistent ethical values built into the system, ensuring your system is free of cognitive biases and real-life prejudices while minimizing “Moral Crumple Zones”.

Building trust in AI-powered products requires a holistic approach that begins before users engage with the product and extends beyond their usage. By following these principles and building trustworthy experiences that prioritize transparency, personalization, reliability, and user control, designers and strategists can establish and maintain trust in the ever-evolving world of AI technology. In doing so, they can pave the way for meaningful and enduring relationships between users and their products.

Recommended Reading

Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction machines: The simple economics of artificial intelligence. Boston, MA: Harvard Business Review Press.

Ashby, F. G., Isen, A. M., & Turken, A. U. (1999). A neuropsychological theory of positive affect and its influence on cognition. Psychological Review, 106.

Blanchard, K. (2018). Leading at a higher level: Blanchard on leadership and creating high performing organizations (Third Edition). Pearson Education.

Bucher, A. (2020). Engaged: Designing for behavior change. Rosenfeld Media.

Csikszentmihalyi, M. (2008). Flow: The psychology of optimal experience. New York, NY: HarperPerennial.

Frick, W. (2015, June). When Your Boss Wears Metal Pants. Harvard Business Review, 84–89.

IEEE (2019). Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. Retrieved September 1, 2021, from The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems website: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf

Kurosu, Masaaki; Kashimura, Kaori (1995). “Apparent Usability vs. Inherent Usability: Experimental Analysis on the Determinants of the Apparent Usability”. Conference Companion on Human Factors in Computing Systems. CHI ’95. New York, NY, USA: ACM: 292–293.

Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M. (2017). Product Roadmaps Relaunched. Sebastopol, CA: O’Reilly Media.

Lencioni, P. M. (2013). The five dysfunctions of a team, enhanced edition: A leadership fable. London, England: Jossey-Bass.

Maeda, J. (2020). The laws of simplicity. London, England: MIT Press.

Maister, D. H., Galford, R., & Green, C. (2001). The trusted advisor. New York, NY: Simon & Schuster

Marcus, G., & Davis, E. (2020). Rebooting AI: Building artificial intelligence we can trust. New York, NY: Vintage Books.

Meyer, E. (2014). The culture map: Breaking through the invisible boundaries of global business. New York, NY: PublicAffairs.

Noessel, C. (2017). Designing Agentive Technology. New York, USA: Rosenfeld Media.

Norman, D. A. (2005). “People, Places, and Things” in Emotional design: Why we love (or hate) everyday things (Paperback edition). New York, NY: Basic Books

Power, B. (2018, January 25). How to Get Employees to Stop Worrying and Love AI. Harvard Business Review. https://hbr.org/2018/01/how-to-get-employees-to-stop-worrying-and-love-ai  

Sillence, E., Briggs, P., Harris, P., & Fishwick, L. (2006). A framework for understanding trust factors in web-based health advice. International Journal of Human-Computer Studies, 64, 697–713.

Tractinsky, N., Katz, A. S., & Ikar, D. (2000). What is beautiful is usable. Interacting with Computers, 13(2), 127–145.

Ulwick, A. (2005). What customers want: Using outcome-driven innovation to create breakthrough products and services. McGraw-Hill Education.

Véliz, C. (2020). Privacy is power: Why and how you should take back control of your data. London, England: Penguin Random House.

Wilson, H. J., & Daugherty, P. R. (2018). Human + machine: Reimagining work in the age of AI. Boston, MA: Harvard Business Review Press.

By Itamar Medeiros

Originally from Brazil, Itamar Medeiros currently lives in Germany, where he works as VP of Design Strategy at SAP and lecturer of Project Management for UX in the M.Sc. Usability Engineering at the Rhein-Waal University of Applied Sciences.

Working in the Information Technology industry since 1998, Itamar has helped truly global companies in multiple continents create great user experience through advocating Design and Innovation principles. During his 7 years in China, he promoted the User Experience Design discipline as User Experience Manager at Autodesk and Local Coordinator of the Interaction Design Association (IxDA) in Shanghai.

Itamar holds a MA in Design Practice from Northumbria University (Newcastle, UK), for which he received a Distinction Award for his thesis Creating Innovative Design Software Solutions within Collaborative/Distributed Design Environments.
