What should you do when an AI-powered system fails to complete its task? Sometimes the fix isn’t a better algorithm but a better way for humans and machines to interact.
1: Human instead of machine logic
Often, AI in everyday life falls short of users’ needs. Users’ situations and moods change all the time, but AI-powered systems have a hard time accommodating this. Challenges arise when users expect the system to follow human logic while its algorithmic decision-making follows machine logic.
Creating AI-powered systems for everyday contexts – such as smart assistants or recommendation systems – calls for re-humanizing design. The principles for building such a system can be very different from the ones that work well in applications like diagnosing rare diseases or modelling climate change.
2: Making people part of the algorithmic decision-making
Users want to adopt different kinds of roles in algorithmic decision-making in different situations. Sometimes they want to stay passive and let the AI-powered system assist them from the background as they focus on something more important, such as studying, cooking or entertaining guests.
But sometimes they wish to take an active role and guide the system or collaborate with it: communicate what they want or nudge it in the right direction. This is often difficult, if it is possible at all.
Designers should empower people to participate in algorithmic decision-making in simple, efficient ways to create frictionless interactions with AI-powered systems.
3: Building trust with openness
When an AI-powered system makes a mistake or completely misjudges the user’s needs, people want to know why it didn’t deliver the results they were hoping for – especially when the correct result seems obvious to them. They may even start to mistrust the algorithm and suspect a hidden commercial logic behind the results.
Explaining how the user’s actions and other variables affect the algorithm builds trust and increases user satisfaction. Even if the AI falters now and then, people feel that the system isn’t just a black box they have no way to interact with.
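As a minimal sketch of this idea, the snippet below scores an item as a weighted sum of user signals and turns the individual contributions into a plain-language explanation. All names here (the signals, weights, and function) are hypothetical illustrations, not any real recommender’s API.

```python
# Sketch: an "explainable" recommendation score. Each item's score is a
# weighted sum of user signals, and the explanation ranks the signals by
# how much each one contributed. Signal and weight names are illustrative.

def score_with_explanation(signals: dict, weights: dict):
    """Return the item's score and a readable breakdown of why."""
    contributions = {name: signals.get(name, 0.0) * w
                     for name, w in weights.items()}
    total = sum(contributions.values())
    # Lead the explanation with the strongest contributing signal.
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    reasons = ", ".join(f"{name} ({value:+.2f})" for name, value in ranked)
    return total, f"Recommended because of: {reasons}"

score, why = score_with_explanation(
    signals={"watched_similar": 1.0, "liked_genre": 0.5, "trending": 0.2},
    weights={"watched_similar": 0.6, "liked_genre": 0.3, "trending": 0.1},
)
```

Surfacing even this much – which signals mattered, and by how much – gives the user a channel to push back on instead of a silent black box.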
4: Interaction, not flawlessness
Developing AI-powered services doesn’t need to be about building the perfect algorithm. People understand how difficult it is to create a flawless program and are even willing to adjust their actions to make the system work better.
The best way to address shortcomings is to be open about the system’s constraints and to present algorithmic intelligence as something that has limitations and works best when users guide it.
People find it helpful to receive clear, actionable information about how they can guide recommendation algorithms. For example, in a streaming service, this can be as simple as telling the system the preferred length of a movie or show.
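The streaming example above can be sketched as a single explicit user control layered on top of an existing ranking. The catalogue, field names, and scores below are hypothetical, assumed only for illustration.

```python
# Sketch: letting the user steer a recommender with one explicit,
# actionable signal - their preferred runtime. Field names and scores
# are invented for this example, not any real streaming service's data.

def filter_by_runtime(catalogue: list, max_minutes: int) -> list:
    """Keep only titles that fit in the user's stated time,
    ranked by the system's existing relevance score."""
    fitting = [t for t in catalogue if t["runtime"] <= max_minutes]
    return sorted(fitting, key=lambda t: -t["score"])

catalogue = [
    {"name": "Long epic", "runtime": 180, "score": 0.9},
    {"name": "Short doc", "runtime": 45, "score": 0.7},
    {"name": "Feature film", "runtime": 95, "score": 0.8},
]

# The user says they have about 100 minutes tonight.
picks = filter_by_runtime(catalogue, max_minutes=100)
```

One honest user-supplied constraint trims the candidate set before the opaque ranking runs, which is exactly the kind of simple, legible guidance the text describes.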
5: A first-world problem can be a sign of something bigger
Even though these minor hiccups with smart assistants or recommendation algorithms can seem like first-world problems, they provide excellent ground for exploring and further developing the relationship between AI-powered systems and their users.
When system providers understand people’s everyday needs and aspirations better, they will also become better at assessing how independent or interactive users want the algorithmic system to be.
Everyday AI is a collaboration between Alice Labs and the Centre for Consumer Society Research, University of Helsinki, in partnership with Reaktor. The Engaging With EverydAI webinar took place on 5th May at 9am CET / 10am EET.