Designing great AI products — Building trust

Kore
Becoming Human: Artificial Intelligence Magazine
6 min read · Jan 16, 2023


The following post is an excerpt from my book ‘Designing Human-Centric AI Experiences’ on applied UX design for Artificial Intelligence.

I’m a stickler about having chai and biscuits in the morning. Every week I go to the neighborhood grocery store to buy a regular packet of Britannia Marie Gold biscuits. When paying at the counter, I don’t check prices; I simply hand over my card. The store owner keeps these biscuits stocked for me and sometimes even gives me a discount. Over the past few years, I have come to trust this little store.

Almost all successful human interactions are built on trust. You often pay more for brands you trust than for cheaper alternatives, which is why companies spend so much on building their brands. When you go to a doctor for a health checkup, the underlying assumption is that you can trust their diagnosis. Most of us prefer to deposit our savings with a trusted bank. We don’t make large purchases on websites we don’t trust. We leave our kids to play in safe places with trustworthy people.

Without trust, it would be debilitating to perform even the simplest of tasks. Without trust, almost all systems would break down and society would fall apart. Trust is the willingness to take a risk based on the expectation of a benefit¹. Trust in business is the expectation that the other party will behave with integrity. In a high-trust environment, people are honest and truthful in their communication, there is a fair exchange of benefits, and people operate in good faith. There is adequate transparency into the workings of the system; insufficient transparency breeds distrust. Clear accountability is established, and people don’t play blame games. A successful team of people is built on trust, and so is a team of people and AI.

A successful team of people is built on trust, and so is a team of people and AI.

Trust in AI

We can use AI systems in organizations to generate predictions, provide insights, suggest actions, and sometimes even make decisions. The output of AI systems can affect different stakeholders directly or indirectly. AI systems are not perfect: they are probabilistic and learn from past data. Sometimes they make mistakes, which is why we require humans to oversee them. For AI products to be successful, the people who use them or are affected by them need to trust these systems.

Components of user trust

When building trust with the users of your AI system, you are essentially trying to develop a good relationship. Your users need to have the right level of confidence when using or working alongside your AI. The following components contribute to user trust in your product:

  1. Competence
  2. Reliability
  3. Predictability
  4. Benevolence

Competence

Competence is the ability of the product to get the job done. Does it improve the experience or address the user’s needs satisfactorily? A good-looking product or one with many features that do not fulfill user needs is not competent. Strive for a product that provides meaningful value that is easy to recognize². Google search is an example of a competent product. It generally offers satisfactory results for your questions.

Reliability

Reliability indicates how consistently your product delivers on its abilities³. A reliable product provides a consistent, predictable experience that is communicated clearly. A product that performs exceptionally well one time and breaks down the next time is not reliable. Apple’s iPhone is an example of a reliable product. It might not carry all the features its competitors have, but you can reasonably trust the ones it does have.

Predictability

A predictable interface is necessary, especially when the stakes are high. If users come to your product to perform critical, time-sensitive tasks, like quickly updating a spreadsheet before a client presentation, don’t include anything in your UI that puts habituation at risk. A probabilistic AI-based solution that can break the user’s habit is not ideal in such cases. However, if users have open-ended goals like exploration, you can consider a dynamic AI-based solution that sometimes breaks user habits, e.g., selecting a movie from dynamic AI-based suggestions.

Benevolence

Benevolence is the belief that the trusted party wants to do good for the user. Be honest and upfront about the value your users and your product will get out of the relationship. Patagonia, a clothing brand, does a great job with benevolence. While its products can be expensive, the company encourages people to reuse and repair their Patagonia clothes and gives a percentage of its sales to environmental causes. The company is upfront about its value to the customer.

Trust calibration

AI can help people augment or automate their tasks. People and AI can work alongside each other as partners in an organization. To collaborate efficiently, your stakeholders need to have the right level of trust in your AI system.

Trust calibration: users over-trust the AI when their trust exceeds the system’s capabilities, and distrust it when they are not confident in its performance.

Users can sometimes over-trust or distrust your AI system, which results in a mismatch of expectations. Users may distrust your AI when their trust falls short of the system’s capabilities; they may doubt your AI’s recommendations and decide not to use them. Users rejecting its capabilities is a failure of the AI system. Over-trust happens when user trust exceeds the system’s capabilities, leading users to accept an AI’s recommendation when they should be using their own judgment. For example, over-trusting the suggestions of a stock prediction service can lead to financial loss. Over-trust can quickly turn into distrust the next time that person uses the service.

Product teams need to calibrate user trust in the AI regularly. Earning user trust is a slow process, and it requires properly calibrating users’ expectations and their understanding of what the product can and can’t do.
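As a sketch of how a product team might support calibration in practice, the snippet below routes a prediction by its confidence score, nudging the user toward their own judgment when confidence is low. The thresholds, labels, and function name are illustrative assumptions, not from the book:

```python
# Illustrative sketch: present an AI prediction differently depending on
# confidence, so users neither over-trust nor distrust the system.
# Thresholds here are hypothetical and would need tuning per product.

def route_prediction(label: str, confidence: float,
                     auto_threshold: float = 0.9,
                     review_threshold: float = 0.6) -> str:
    """Decide how to present an AI prediction to the user."""
    if confidence >= auto_threshold:
        return f"Suggested: {label} (high confidence)"
    if confidence >= review_threshold:
        return f"Possible: {label}; please review before accepting"
    return "Low confidence; falling back to your judgment"

print(route_prediction("approve_loan", 0.95))
print(route_prediction("approve_loan", 0.72))
print(route_prediction("approve_loan", 0.40))
```

The key design choice is that the UI states the system’s uncertainty explicitly instead of presenting every output with equal authority.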

How to build trust?

AI systems are probabilistic and can make mistakes. It is the job of product creators to build trustworthy relationships between the AI and its users. Building trust is not about being right all the time; it is about integrity, accepting mistakes, and actively correcting them. Users should be able to judge how much they can trust your AI’s outputs, when it is appropriate to defer to the AI, and when they need to make their own judgments. There are two essential parts to building user trust in AI systems: explainability and control.

Explainability

If we don’t understand how AI systems work, we can’t really trust them or predict the circumstances under which they will make errors. Explainability means ensuring that users of your AI system understand how it works and how well it works. Your explanations allow product creators to set the right expectations and users to calibrate their trust in the AI’s recommendations. While providing detailed explanations can be very complicated, we need to optimize our explanations for user understanding and clarity.
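One lightweight way to make an output explainable is to show which inputs drove a score. The sketch below does this for a simple linear model; the feature names and weights are hypothetical, chosen only to illustrate the idea of ranking contributions for the user:

```python
# Illustrative sketch: explain a linear model's score by listing each
# feature's contribution, ranked by how much it moved the result.
# Feature names and weights are made up for the example.

def explain_score(features: dict, weights: dict):
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    # Rank features by the absolute size of their contribution
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

score, reasons = explain_score(
    {"on_time_payments": 0.9, "credit_utilization": 0.8, "account_age_years": 4},
    {"on_time_payments": 2.0, "credit_utilization": -1.5, "account_age_years": 0.1},
)
print(f"score={score:.2f}")
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.2f}")
```

A user who sees that on-time payments helped their score while high credit utilization hurt it can calibrate how much to trust the number, rather than accepting it blindly.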

Control

Users should be able to second-guess the AI’s predictions. Users will trust your AI more if they feel in control of their relationship with it. Giving users some control over the algorithm makes them more likely to feel the algorithm is superior and more likely to continue to use the AI system in the future. You can do this by allowing users to edit data, choose the types of results, ignore recommendations, and correct mistakes through feedback.
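The feedback loop described above can be sketched in a few lines: a toy recommender that lets the user dismiss a suggestion and then excludes it from future rankings. The class, scores, and item names are illustrative assumptions:

```python
# Illustrative sketch: give users control by letting them dismiss
# recommendations, and honor that feedback in subsequent rankings.

class Recommender:
    def __init__(self, base_scores: dict):
        self.base_scores = base_scores   # item -> model score
        self.dismissed = set()           # explicit user feedback

    def dismiss(self, item: str):
        """Record that the user chose to ignore this recommendation."""
        self.dismissed.add(item)

    def recommend(self, k: int = 3):
        ranked = sorted(
            ((0.0 if item in self.dismissed else score, item)
             for item, score in self.base_scores.items()),
            reverse=True,
        )
        return [item for score, item in ranked[:k] if score > 0]

rec = Recommender({"thriller": 0.9, "comedy": 0.7, "drama": 0.5, "horror": 0.2})
print(rec.recommend())       # the model's top picks
rec.dismiss("thriller")      # the user overrides the AI
print(rec.recommend())       # feedback is reflected immediately
```

Even this crude mechanism shows the principle: the user’s correction visibly changes the system’s behavior, which reinforces their sense of control.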

Building trust is not about being right all the time; it is about integrity, accepting mistakes, and actively correcting them.

Your AI system will work alongside people and will make decisions that impact them. For that partnership to work, your stakeholders need the right level of trust in the system, which makes building trust a critical consideration when designing AI products.

