Introduction
One day, a project landed on my desk—creating a proposal for a company that wanted to transform its customer service experience with ChatGPT. There was just one catch—I'd never really ventured into the realm of AI before. And let me tell you, I was utterly lost.
For years, I'd been crafting digital interfaces where the user tells the computer exactly what to do, step by step, until they get the result they want. But AI? That was a whole different story. The user no longer tells the computer what to do. Instead, they tell the computer what they want, leading with the desired outcome.
So there I was, standing at the crossroads of design and AI, desperately scavenging the internet for a comprehensive list of design principles tailored to AI interfaces. Guess what? It was slim pickings. So, in a fit of inspiration (and perhaps a dash of desperation), I decided to roll up my sleeves and create my own list. And today, I'm thrilled to share it with you.
General principles
Let's kick off this adventure with the foundation of our AI journey—basic principles that will set the stage for designing AI-powered interactions that leave a lasting impact.
01 Determine if AI adds value
Don't fall into the trap of AI-for-everything. Sure, AI is all the rage, and it's tempting to throw it at every problem like it's the magical solution. But sometimes, sticking to good old heuristics or manual control can actually create smoother experiences.
So, before you ask, "Can we use AI to ___________?" try shifting the focus to something more human-centered: "How might we solve ___________?" Once you've got a clear problem to tackle, then and only then ask, "Can AI swoop in and save the day with a fresh, innovative approach?"
02 Don’t pretend AI is human
Making dialogue systems seem all human-like might initially seem like a good idea. After all, who wouldn't prefer a conversational buddy over a cold, robotic exchange?
But beware – going all-in on anthropomorphism can be a slippery slope. What might seem like the AI understanding things like a human is, in reality, just a complex series of calculations predicting the next word in a sentence. Our goal as designers is to ensure users don't place undue trust in the AI, mistaking it for a genuine human presence, which can lead to them trusting generated information unquestioningly and even sharing sensitive data.
To make AI appear more human, some applications place human-like avatars throughout their interface. Others are programmed to chat as if they had human feelings, mimicking your everyday chat with a friend. Or, in customer service, for instance, some disguise automatically generated messages to sound as if they came from a human agent.
Now, don't get me wrong; I'm not here to rain on the parade of human-like elements in AI interfaces. They can be quite engaging when used correctly. However, before you jump on the anthropomorphism bandwagon, take a moment to consider the risks involved.
03 Don’t pretend it’s magic either
Now, on the other end of the spectrum, we also don’t want to pretend AI is this magical technology that can do no wrong. It's our job to help users form realistic expectations about what AI can and can't do. Transparency is your best friend here; it saves users from disappointment down the road.
In fact, AI can have its moments of bias, discrimination, or just plain old inaccuracies. While engineers can certainly work diligently to minimize these shortcomings, designers play a crucial role in managing user expectations.
Many apps and websites address this head-on during the onboarding process, saying something like, "Hey there, just a quick heads-up: I can sometimes get things wrong. Oh, and by the way, please don’t share any sensitive information here." This kind of upfront honesty not only builds trust but also helps users steer clear of overreliance on AI and the potential pitfalls that come with it.
04 Think beyond the chat
When we think of LLMs, we often gravitate toward the familiar world of conversational AI or chatbots. It's the default, the go-to concept that pops into our minds first. However, it's essential to realize that we're not tethered to this singular mode of interaction.
Pause to consider some alternatives. Maybe a contextual menu or a command palette encouraging users to transform or generate text? Or an autocomplete feature that provides suggestions, enhancing the entire writing process. Whether your product is for crafting code or composing prose, give some chat alternatives a try.
Onboarding
Now, let’s take a look at something more practical. We’ll break things down into five groups: onboarding, input design patterns, output, feedback, and lastly - error handling.
05 Explain the benefit, not the technology
Picture this scenario: You and your brilliant team have worked tirelessly to integrate the latest LLM technology into your product. You're bursting with excitement to show it off: "Download our app because it's the most current, innovative, and advanced!" But is it the best way to convince users to choose your app from all other apps on the market?
Users are not seeking the nuts and bolts of your technology but ways to improve their lives and address their specific needs: “What can this app do for me?” To convince users to give your app or website a try, explain the benefit, not the technology. It's not about the rocket ship; it's about the incredible places it can take them.
06 Anchor AI use in a common pattern
This is a powerful strategy where you leverage something users already know and understand as a springboard to introduce them to something new and innovative. When you introduce AI in a way that aligns with familiar interactions, you essentially reduce the learning curve for users. They don't have to start from scratch; they can build on what they already know. This familiarity not only eases their entry into the AI experience but also significantly promotes user adoption.
Ask yourself this question: "Is there a commonly used pattern you can base the interaction on?" Take, for instance, the ever-present search bar. Think about how users instinctively type in their queries, and as they do, they're greeted with predictive suggestions powered by an LLM. Or, the trusty autocomplete feature - something that most users have encountered while writing emails or text messages. The LLM anticipates the next word or phrase, creating a seamless writing experience.
So, when designing AI interactions, try to blend the familiar with the innovative, helping users seamlessly transition into using the new technology with ease.
07 Guide the first interactions
But your job doesn't end there. Put in extra design effort to guide the user through their first interactions. How can we show users what type of interaction is expected to ensure their success in those critical first moments?
You might opt for step-by-step instructions, tooltips that provide guidance at the right moment, meaningful input field placeholders, or even links to instructional videos for those who feel they need extra hand-holding.
Another effective strategy is showcasing real-life examples of the types of inputs that can yield impressive results. By illustrating through examples, you're essentially offering users a glimpse into the possibilities and encouraging them to explore further.
The goal is to ensure that users don't feel left alone to figure out how to interact with your new app or website.
Input
With users comfortably onboarded, the stage is set for them to provide input.
Here’s where the real problem begins. Users must articulate their thoughts and goals effectively to get a meaningful response from the LLM. This often proves to be a formidable task.
Recent literacy research suggests that about half of adults, even in wealthy countries like the US and Germany, read at a low level of comprehension. And writing is often harder than reading and comprehending written text. This means even more individuals may struggle to express themselves effectively in their own words.
Writing well-formed prompts or messages requires a degree of articulation, so how can we aid our users in this challenging task?
08 Provide pre-selected options (sometimes)
Giving the user pre-selected options for the input field can be like offering a guiding hand, helping users navigate the system, and discovering potential avenues for interaction. It's not just about making their experience smoother; it's about fostering confidence. Users gradually grasp the expected format of the input, building their skills and assurance to eventually craft their custom inputs.
But beware! Overreliance on predefined responses can be a double-edged sword. While they certainly have their merits, it might backfire if users are constantly tethered to pre-selected options. After all, one of the remarkable strengths of LLMs is their ability to interpret and respond to natural language. This inherent capability remains underutilized if users are always confined to suggested prompts.
09 Accept or ignore autocomplete
Autocomplete is an alternative method of providing input suggestions. It helps avoid typos and saves time.
Perhaps one of the most compelling aspects of the autocomplete pattern is the reassurance it provides to users. It offers a safety net, ensuring that their input is correct and in harmony with the context. Yet, users also need to maintain control over accepting or rejecting the suggestion. This empowers users, making them feel like they are in the driver's seat.
Designers can maximize the benefits of autocomplete by keeping AI interactions collaborative and empowering while preserving user agency at all times. Users need crystal clarity about which text they typed and which was auto-completed. It's all about striking the balance between guidance and user autonomy.
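To make that typed/suggested split concrete, here's a minimal Python sketch (with made-up names) of the state a text composer might keep: the user's own text and the pending ghost suggestion stay strictly separate until the user explicitly accepts.

```python
from dataclasses import dataclass

@dataclass
class ComposerState:
    """Keeps user-typed text strictly separate from the pending AI suggestion."""
    typed: str = ""
    suggestion: str = ""  # ghost text, rendered dimmed in the UI

    def accept(self) -> None:
        """User presses Tab: the suggestion becomes part of their text."""
        self.typed += self.suggestion
        self.suggestion = ""

    def reject(self) -> None:
        """User presses Esc (or just keeps typing): the suggestion is discarded."""
        self.suggestion = ""

state = ComposerState(typed="Thanks for reaching ")
state.suggestion = "out! We'll get back to you shortly."
state.accept()  # the suggestion is now owned by the user's text
```

The point of the separation is that nothing the AI proposed ever enters `typed` without an explicit user action.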
10 Skip writing altogether
Here's a unique concept: sometimes, you don't need an input field to utilize the LLM. Many things can be done without the need for explicit verbalization.
In essence, offering users a menu of actions or options frees them from formulating their requests from scratch and enables them to select from predefined choices. This approach enhances usability, simplifying the interaction.
But isn’t this in contradiction with what we discussed about utilizing the remarkable power of LLMs to comprehend natural language? It sure is. However, there are moments when prioritizing superior usability is the way to go. This is particularly valuable in scenarios where articulating a desired output is exceptionally challenging. In such cases, offering predefined options ensures that the AI interaction remains accessible.
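As a rough illustration of "no input field", here's what this can look like under the hood: a handful of hypothetical menu actions, each mapped to a prewritten prompt template, so a single click on selected text produces the full request.

```python
# Hypothetical prompt templates behind a context menu; the user never types
# a prompt, they just pick an action for the text they selected.
ACTIONS = {
    "Summarize": "Summarize the following text in two sentences:\n\n{text}",
    "Fix grammar": "Correct the grammar of the following text, changing nothing else:\n\n{text}",
    "Make shorter": "Rewrite the following text at half the length:\n\n{text}",
}

def build_prompt(action: str, selected_text: str) -> str:
    """Turn a menu click into a fully formed prompt for the LLM."""
    return ACTIONS[action].format(text=selected_text)

prompt = build_prompt("Fix grammar", "Their going to the store tomorow.")
```

The articulation burden moves from the user to the designer, who writes each template once.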
Output
Alright, we've got user input in the mix, and now comes the pivotal moment of deciding how to showcase the LLM-generated output.
11 Use markup, enrich the text
You can use markdown features like headings, bullet points, and numbered lists to break up the monotony of the generated text. This can work wonders in breaking down information and making it more digestible for users. But don’t let me stop you at that! Add graphs and charts, and create tables to convey complex information. They add a layer of organization that enhances user understanding in a way that plain text just can’t.
One more powerful strategy is to add direct links to the products you’re offering within the conversations. For example, an online store chat assistant can inform you about the latest fashion trends and then attach a carousel of relevant products ready for purchase. This enhances user engagement and simplifies the user journey, creating a more fluid and satisfying experience.
12 Mind the generation times
LLM-generated responses may take a moment to materialize, and it's essential to keep users informed and engaged during this brief wait. You can incorporate a visual indicator, such as a loading spinner or progress bar complete with an estimated time. You can also use messaging like "We're working on it." Just let users know that their query is being addressed. This transparency manages user expectations and indicates that the app is actively working, making the wait more bearable.
But among the arsenal of strategies, streaming answers as they are generated has been my favorite so far. This approach not only keeps users engaged but also minimizes the perceived wait time. Users can read the response as it appears on the screen, making the interaction feel dynamic.
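Here's a small Python sketch of the streaming idea, with a fake token generator standing in for a real streaming API: each chunk is appended to the visible message the moment it arrives, instead of waiting for the full response.

```python
def fake_token_stream():
    """Stand-in for a real streaming LLM API; actual clients yield
    text deltas chunk by chunk in much the same way."""
    for token in ["Designing", " with", " LLMs", " is", " fun."]:
        yield token

def render_streaming(stream) -> str:
    """Append each chunk to the visible message as soon as it arrives,
    so the user reads while the model is still generating."""
    shown = ""
    for chunk in stream:
        shown += chunk
        print(shown, end="\r")  # in a real UI: update the message bubble in place
    print()
    return shown

final = render_streaming(fake_token_stream())
```

The perceived wait collapses to the time until the first token, not the whole generation.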
13 Mind the tone of voice
The tone of voice plays a pivotal role here. It's not just about what you say but how you say it. One approach is to pre-assign a specific tone of voice or communication style to the outputs generated by the LLM. For instance, you can instruct the AI to maintain a professional tone or consistently inject creative flair into the conversation. This can work well for applications where a consistent and predefined voice is needed, such as customer support or technical assistance.
Alternatively, you can offer users the power to define the tone themselves. Let’s say the user wants to create a dating profile. Do they want to sound bold, adventurous, or gentler and kind? By allowing users to choose the desired tone, you empower them to express their individuality and tailor the generated output precisely to their unique needs and personality.
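Both approaches boil down to what you put in the model's system message: a product-wide default, plus an optional user-selected tone that overrides it. A minimal sketch, with invented tone presets:

```python
# Invented tone names for illustration; a real product would tune these
# descriptions carefully and test how the model interprets them.
DEFAULT_TONE = "professional and concise"

TONE_PRESETS = {
    "bold": "confident, adventurous, a little daring",
    "gentle": "warm, kind, and reassuring",
}

def system_prompt(user_tone: str = "") -> str:
    """Build the system message that steers the LLM's tone of voice."""
    tone = TONE_PRESETS.get(user_tone, DEFAULT_TONE)
    return f"You are a writing assistant. Always respond in a {tone} tone."

print(system_prompt())          # product-wide default
print(system_prompt("gentle"))  # user-picked tone, e.g. for a dating profile
```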
14 Show the reasoning behind the output
The inner workings of LLMs resemble a black box. These intricate models perform complex computations, and explaining precisely how they arrive at their outputs remains extraordinarily difficult. Nonetheless, sometimes, we need to provide users with transparency and understanding.
In some cases, such as a medical treatment recommendation or a credit approval tool, the potential consequences of these AI-generated suggestions can carry profound implications that can be genuinely life-altering. This underscores the necessity for transparent, explainable, and accountable AI systems to ensure the equitable treatment of individuals who rely on them.
LLMs are powerful, but they can inherit biases in their training data, leading to unintended discrimination. For example, in a hiring platform that uses an LLM to screen applicants, the model might inadvertently favor one demographic over another, perpetuating bias. When the AI provides a recommendation, it's vital that this rationale is presented in clear and understandable human language so we can gain a deeper understanding of why a particular decision was made.
Ensuring that LLMs produce results that are impartial and unbiased is not just a matter of fairness; it's a fundamental principle of responsible AI deployment.
15 Sources!
This will surprise nobody. Nurturing trust through source transparency is one of the most straightforward and clear-cut strategies. It involves the system openly sharing where it got the knowledge used to produce the generated results. This enables users to assess the credibility and reliability of the information they receive. These sources can include articles on the internet, internal company resources, or other references. The more reputable, the better.
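One lightweight way to wire this up is to treat sources as structured data attached to every answer, rather than an afterthought. A sketch with a made-up answer and source:

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str

@dataclass
class Answer:
    text: str
    sources: list  # list[Source]

# Hypothetical answer from a retrieval-backed assistant.
answer = Answer(
    text="Returns are accepted within 30 days.",
    sources=[Source("Returns policy (internal KB)", "https://example.com/kb/returns")],
)

def render(answer: Answer) -> str:
    """Render the answer with its sources listed underneath."""
    lines = [answer.text, "", "Sources:"]
    lines += [f"- {s.title} ({s.url})" for s in answer.sources]
    return "\n".join(lines)
```

Because sources travel with the answer as data, the UI can render them as links, footnotes, or cards without reparsing the model's prose.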
Feedback
Now, a quick section about feedback. In the world of AI-generated content, the user's voice is a powerful tool for refinement. Creating a mechanism for users to provide feedback on the responses generated by LLMs can be a game-changer in the quest to fine-tune and refine its capabilities.
16 Encourage explicit feedback
Explicit feedback refers to a direct and precise response to AI-generated content. Was the information provided helpful or not so much? Explicit feedback empowers users to actively participate in improving AI interactions.
This feedback can take various forms, from simple ratings like thumbs-up or thumbs-down to more specific responses such as answering direct questions or rating systems.
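Capturing those ratings can be as simple as appending structured events to a log for later analysis. A minimal sketch with invented field names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackEvent:
    """One explicit user verdict on a generated response."""
    response_id: str
    rating: str    # "up" or "down"
    comment: str = ""  # optional free-text detail
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

feedback_log = []  # in production: a database table or analytics pipeline

def record_feedback(response_id: str, rating: str, comment: str = "") -> None:
    if rating not in ("up", "down"):
        raise ValueError("rating must be 'up' or 'down'")
    feedback_log.append(FeedbackEvent(response_id, rating, comment))

record_feedback("resp-42", "down", "The answer ignored my second question.")
```

Tying each rating to a `response_id` lets you join it back to the exact prompt and output later.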
17 Enable implicit feedback
While explicit feedback is undeniably valuable, there's an even more potent player in AI refinement – implicit feedback. Implicit feedback, derived from user interactions, requires no direct input. It is more subtle and often more reliable. Instead of soliciting user opinions directly, implicit feedback focuses on how users engage with AI systems, extracting insights from their behavior.
Are users clicking on the provided links? How much time are they spending reading certain content? And here's an interesting tidbit: if they're copying and pasting a response, that's a strong signal it was helpful to them.
In e-commerce, implicit feedback takes its cues from how users engage with the recommended products and, ultimately, their purchase history. If you find those sneaker recommendations irresistible and they make their way into your shopping cart, that's a high-five moment for implicit feedback.
Error handling
One thing is for sure – errors are bound to happen. No system is infallible, and handling errors and mistakes is a crucial aspect of ensuring smooth and reliable interactions.
18 Stop generating
There are occasions when the AI may inadvertently go off track, generating content that's either irrelevant or, in more severe cases, inappropriate.
Imagine you're using an LLM-powered content generation tool to draft an article on technology trends. However, the AI suddenly starts producing paragraphs about gardening tips. In such a situation, you, as the user, can already see that this is going nowhere. Instead of waiting for the whole article to be generated, you should have the power to intervene and instruct the AI to stop creating content unrelated to the intended topic.
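Under the hood, "Stop generating" can be as simple as a flag the rendering loop checks between chunks. A sketch using a fake stream that drifts off-topic, with `threading.Event` standing in for the button:

```python
import threading

stop_requested = threading.Event()  # set when the user clicks "Stop generating"

def stream_with_stop(chunks) -> str:
    """Render chunks until the stream ends or the user asks to stop."""
    shown = ""
    for chunk in chunks:
        if stop_requested.is_set():
            break  # a real client would also abort the underlying API request
        shown += chunk
    return shown

def drifting_stream():
    """Fake generation that wanders off-topic, as in the gardening example."""
    yield "Top tech trends this year: "
    yield "AI and edge computing. "
    stop_requested.set()  # the user spots the drift and hits Stop
    yield "Speaking of gardens, prune your roses in early spring..."

partial = stream_with_stop(drifting_stream())
```

The user keeps what was already useful and throws away the rest, without waiting for the full article.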
19 Regenerate response
Let’s say that after the LLM generates a paragraph, the user reviews it and notices that while the content is on the right track, it could be a bit more engaging. Instead of starting from scratch, you can offer an option to click "regenerate response." The LLM then provides you with an alternative version, which might use different wording or structure. Users can fine-tune the response without the hassle of starting over.
As a fun fact, here's a little-known feature – the ability to cycle through autocomplete suggestions in tools like GitHub Copilot. You can use keyboard shortcuts to flip through alternative completions while writing code. It's a feature that power users often appreciate. It's hard to discover, but how powerful!
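A sketch of the regenerate-and-cycle idea, with a stub standing in for the actual LLM call (a real implementation would vary the sampling temperature or seed to get different variants):

```python
def fake_generate(prompt: str, seed: int) -> str:
    """Stand-in for an LLM call; a new seed yields a different variant."""
    openings = ["Here's a thought:", "Consider this:", "Try this angle:"]
    return f"{openings[seed % len(openings)]} {prompt}"

class ResponseCarousel:
    """Keeps every generated variant so the user can cycle, Copilot-style."""
    def __init__(self, prompt: str):
        self.prompt = prompt
        self.variants = []
        self.index = -1

    def regenerate(self) -> str:
        """Generate a fresh variant and show it."""
        self.variants.append(fake_generate(self.prompt, seed=len(self.variants)))
        self.index = len(self.variants) - 1
        return self.variants[self.index]

    def previous(self) -> str:
        """Cycle back to an earlier variant instead of losing it."""
        self.index = max(self.index - 1, 0)
        return self.variants[self.index]

carousel = ResponseCarousel("an intro for the article")
first = carousel.regenerate()
second = carousel.regenerate()
back = carousel.previous()  # the earlier variant is still there
```

Keeping earlier variants around matters: users often realize the previous attempt was better after seeing the next one.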
20 Suggest alternatives
What happens when the LLM doesn't quite get your request? That's where suggesting alternative phrasings comes into play.
We accomplish two important things by highlighting effective queries and offering alternative phrasings. First, users can easily tweak their questions for more accurate and helpful responses. Secondly, this practice is a subtle way of teaching users the most effective query formats, ultimately enhancing the overall effectiveness of AI interactions in the long term.
21 Give control back to the user
Sometimes, regenerating the response or using the alternatives is insufficient. The LLM-generated suggestion just doesn’t match the user's needs precisely. Here is where a crucial principle of returning control to the user steps in. It's about ensuring that users have the final say so they can make choices that align with their goals without being solely reliant on the system's recommendations.
Just imagine you couldn't alter the content generated by an AI! Say you can’t edit the article generated for your website, or you're stuck with the autocomplete suggestion you accepted while writing code. In most cases, AI should act as a helpful assistant, not a decision-maker.
22 Facilitate human hand-off
There are moments when an LLM's capabilities reach their limits, or situations where human expertise and intervention are essential. To navigate these scenarios, it's crucial to have mechanisms in place that allow for a hand-off to human agents.
For example, imagine a user conversing with a customer service chatbot to resolve an issue with a recent online purchase. The chatbot can handle basic queries and guide users through common problems. However, if the user requests a custom refund requiring human authorization, the chatbot should seamlessly hand the user over to a human customer service representative. Don’t make the user explain their issue yet again!
In another scenario, a user might seek guidance from an AI-based mental health chatbot. While the bot can offer valuable resources and support, there are times when human expertise is irreplaceable. Suppose the chatbot detects that you're in a crisis or need immediate emotional help. In that case, it can facilitate a hand-off to a human therapist, ensuring you receive the care only a human can provide.
While AI can be undeniably helpful in many aspects of our lives, there are situations where human expertise and intervention remain irreplaceable. This human-AI collaboration ensures that user needs are met, even when AI encounters its boundaries.
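A toy sketch of the hand-off mechanics: detect a trigger the bot can't handle, then package the full transcript so the human agent can pick up right where the bot left off. The triggers and payload fields here are invented for illustration:

```python
# Hypothetical trigger phrases; a production system would use a classifier
# rather than keyword matching.
HANDOFF_TRIGGERS = ("refund", "speak to a human", "agent")

def needs_human(message: str) -> bool:
    """Decide whether the latest message is beyond the bot's authority."""
    text = message.lower()
    return any(trigger in text for trigger in HANDOFF_TRIGGERS)

def build_handoff(transcript) -> dict:
    """Package everything the human agent needs to pick up mid-conversation."""
    return {
        "summary": f"{len(transcript)} prior messages attached",
        "transcript": transcript,
        "reason": "user request exceeds bot authorization",
    }

transcript = [
    {"role": "user", "content": "My order arrived broken."},
    {"role": "bot", "content": "I'm sorry! I can help with standard returns."},
    {"role": "user", "content": "I want a custom refund instead."},
]
if needs_human(transcript[-1]["content"]):
    ticket = build_handoff(transcript)
```

Attaching the transcript is the crucial part: it's what spares the user from explaining their issue all over again.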
Outro
So, that's a wrap on our journey through designing Language-Based User Interfaces powered by AI. We've touched on some fundamental principles that can help steer your design efforts in the right direction. But remember that the world of AI and design is like an ever-changing puzzle, with pieces constantly shifting.
This is just a snapshot of the current landscape, a peek into our toolkit. As AI evolves, so will the challenges and possibilities.
Embrace the uncertainty, stay curious, and continue exploring.
It's a collaborative journey that none of us have figured out yet.
So, think of this list as a starting point, something to provide you with more assurance as you embark on your journey of designing AI-driven user interfaces.
Good luck, and relish this exciting time in design, where innovation knows no bounds!
Some sources:
- https://pair.withgoogle.com/guidebook/patterns
- https://fullstackdeeplearning.com/llm-bootcamp/spring-2023/ux-for-luis/
- https://www.nngroup.com/articles/ai-paradigm/
- https://courses.minnalearn.com/en/courses/trustworthy-ai/overview/
How to design digital products with LLMs is a practical use case of effective machine learning models. Reaktor takes part in the European Industrial Grade Machine Learning for Enterprises research program that develops exactly these kinds of capabilities.