How to design for Artificial Intelligence-enabled UI
A black box full of algorithms
With this article, I am sharing my personal lessons from creating an Artificial Intelligence-enabled UI. I developed four baseline principles that can be applied when designing Artificial Intelligence-enabled user interfaces:
- Discovery and expectation management
- Design for forgiveness
- Data transparency and tailoring
- Privacy, security and control
Alongside applying basic UX knowledge, keep these design principles in mind and you will have a good base for your Artificial Intelligence-enabled user interface.
Artificial Intelligence promotes efficiency and convenience
Since the dawn of the industrial age, we have been using technology to automate and eliminate labor-intensive tasks in order to become more efficient. Technology makes it possible for humans to have more convenient lifestyles.
Artificial Intelligence is an emerging technology that promotes efficiency and convenience. It’s revolutionizing the way people interact with machines. Companies generally treat Artificial Intelligence as a technology that takes the human out of the equation to improve convenience and productivity. A recent example is the Google Artificial Intelligence Assistant (using Google Duplex technology), which takes away the “labor-intensive task” of making phone calls and empowers the human to be more efficient.
Genuine Artificial Intelligence is far beyond the field’s current capabilities
What’s more, Google Duplex shows the world what is currently possible with the technology (and, by implication, what is not possible yet), rather than being purely an exercise in human-centered design. The announcement of Google Duplex can be seen as a reminder that genuine Artificial Intelligence is far beyond the field’s current capabilities, even at a company with the best Artificial Intelligence experts, enormous quantities of data, and huge computing power. It also stirs up discussion about the boundaries of what people actually want an Artificial Intelligence to execute for them.
So what is Artificial Intelligence?
Artificial Intelligence is still considered new (even though the field emerged in the 1950s). People can be anxious about Artificial Intelligence because it’s a black box full of their privacy-sensitive data that runs on algorithms with undiscovered possibilities.
The term is applied when a machine mimics cognitive functions such as learning and problem-solving. Artificial Intelligence is programmed by humans (with biases) to complete certain tasks and tries to become as effective as possible at them. Moreover, the Artificial Intelligence doesn’t really “understand” or “learn” like humans do. It simply follows the instructions it is programmed with and improves itself while doing so.
What Artificial Intelligence does well is quickly processing repetitive, complex, and focused tasks. It mimics “learning” by being fed large quantities of data, much as humans learn by being “fed” experiences. Stimulated by “big data”, Artificial Intelligence can learn how to find and discover. Building on brute-force algorithms, it creates a path of least resistance to reach its goal.
Artificial Intelligence and its weaknesses
Artificial Intelligence also has its weaknesses, such as not (yet) being able to understand nuance or context. The Artificial Intelligence behind a photo library has trouble recognizing photos with depth or photos which are upside down. This Artificial Intelligence can be tricked too. Google’s research team showed that if a sticker (“the psychedelic toaster sticker”) is added to a scene with other objects, the Artificial Intelligence will disregard the other objects and classify the scene as “a toaster”.
Image: Tom B. Brown/Dandelion Mané
Moreover, the Artificial Intelligence behind the Roomba failed to recognize puppy feces; as a result, the Roomba ran over them and smeared the floors with feces (read the funny story here).
Designing against a black box full of algorithms
The examples above occurred because the Artificial Intelligence was not fed crucial “small data” and was therefore not programmed and prepared for these unexpected “sad flow” scenarios. That’s why it’s important for designers not only to uncover ideal scenarios (where everything goes as expected, with no errors) but, more crucially, to discover and prepare for unexpected ones. The complex challenge for (human) designers is having to design against the black box of algorithms: a black box that offers a powerful set of options, with a human left to discover these possibilities.
Types of Artificial Intelligence
According to Chris Noessel (Designing for Agentive Technology, 2017), Artificial Intelligence can be broken into three categories:
- Narrow artificial intelligence – intelligence that can learn and infer, but cannot generalize. Narrow intelligence is focused on one task. This is the Artificial Intelligence that exists today.
- General intelligence – computer intelligence which would be like that of a human brain.
- Super intelligence – intelligence that would far surpass that of humans.
To avoid confusion, in this article I use the term Artificial Intelligence to refer to narrow artificial intelligence.
What are Artificial Intelligence-enabled user interfaces?
For Artificial Intelligence to be used by humans, a user interface (UI) is required. The UI is the medium where interactions between humans and machines occur.
So what are Artificial Intelligence-enabled UIs? They are interfaces with simulated cognitive functions that facilitate the interaction between humans and machines.
Examples of Artificial Intelligence-enabled user interfaces are Amazon Alexa, Nest Thermostat, Jarvis, IBM Watson, iRobot Roomba, Netflix, and Spotify.
These examples are mere manifestations. For clarity: Artificial Intelligence can be implemented anywhere, such as in a search engine, behind your photo library, or in a vacuum cleaner. The UI depends on how well the Artificial Intelligence fits the task and the channel.
Within an Artificial Intelligence-enabled UI, the Artificial Intelligence has the ability to understand the user’s commands and present the user with the best possible predictions based on all the data available to it. The interaction with the Artificial Intelligence should be simple, efficient, and easy to use so users can accomplish their goals. The challenge designers face now is how to design Artificial Intelligence-enabled UIs that users can trust with their personal data.
Developing Artificial Intelligence-enabled UI at Deloitte
The AI tool functions as an agent for the audit experts by processing big data with brute-force algorithms at high speed. The AI removes the manual, time-consuming, and tedious tasks embedded in auditors’ current jobs, such as dossier cross-checking. Moreover, the AI is able to analyze large volumes of documents, read text, find trends, and generate predictions. It monitors and remembers the auditor’s behavior and choices, and learns from them for next time. The tool is able to convert big data into insightful information for the auditor.
Nonetheless, being an auditor requires deep professional knowledge and experience in the audit process. The AI tool still needs the auditor to understand the context of data. Therefore, the AI, being an agent, leaves the final say to the auditor.
Design principles for Artificial Intelligence-enabled UI
When designing the interface for the audit AI-tool, I was challenged to apply AI principles to my user experience (UX) design process. I developed four baseline principles which can be applied when designing for AI-enabled user interfaces. These principles are based on exploratory and benchmark research I did.
There are four categories of principles to implement:
- Discovery and expectation management
- Design for forgiveness
- Data transparency and tailoring
- Privacy, security and control
Category 1: Discovery and expectation management – Set user expectations well in order to avoid false expectations
1. Users should be aware of what the tool can and cannot do – People are still unfamiliar with Artificial Intelligence, so the design needs to be more guiding. Manage expectations: let the user know the possibilities of the tool, how the Artificial Intelligence learns, and what the user needs to do to accomplish their goals. Integrate this into the onboarding process.
For example, the only expectation I have of a pet feeder is to have it feed my cat multiple times per day without my physical presence. However, Petnet’s smart feeder offers me more than that. The Artificial Intelligence uses my cat’s weight, age, breed, and activity level to match the cat with portion sizes.
2. Users should expect most benefit from minimal input – Design the product so the user can expect a valuable outcome from “natural input”. Natural input can be defined as input that feels natural to the user, and thus feels like (nearly) zero input. The product should be easy to use and help users accomplish their goals simply and efficiently.
In the previous cat example, the pet feeder only has to be set up once. Additionally, it sends an alert when it’s running low on food. The only task I have to do is to fill it with kibbles when it prompts me to do so.
3. Prepare for undiscovered and unexpected usage – The use of Artificial Intelligence in people’s daily lives is still new. Users will discover ways to use the technology that it was not designed for. That’s why designing for discovery is crucial. Really invest time in discovering the (unexpected) possibilities of the Artificial Intelligence tool’s usage.
For example, I lost a small earring. I have carpeting, and could not locate it. I eventually emptied the Roomba’s bin and let it roam the room. A few moments later, I found my earring in the Roomba’s bin.
4. Educate the user about the unexpected – The Artificial Intelligence will make mistakes. Human designers are prone to error and are not all-knowing. It’s a safe bet that users will encounter scenarios which designers have not (yet) discovered and included in the algorithms. Educate users and be honest about the possibility that they might encounter sad-path scenarios.
For example, if you ask Siri the following: “Show me the nearest fast food restaurant which is not McDonald’s” – Siri will sadly show you *drumroll* the location of the nearest McDonald’s. This is because Siri is programmed to recognize words such as “nearest”, “fast food restaurant” and “McDonald’s”; however, Apple did not account for the possibility that users might ask Siri not to show them something.
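The failure above can be illustrated with a minimal sketch. This is a hypothetical keyword-based intent matcher, not Apple’s actual implementation: it keys on recognized place names and never inspects the surrounding “not”, so the excluded restaurant is exactly what gets returned.

```python
# Hypothetical sketch of naive keyword-based intent matching
# (not Apple's actual code): negation words are never inspected.

KNOWN_PLACES = ["McDonald's", "Burger King", "KFC"]

def nearest_fast_food(query: str) -> str:
    """Return the first known place name found in the query."""
    for place in KNOWN_PLACES:
        if place.lower() in query.lower():
            # The matcher keys on the recognized name alone;
            # the surrounding "not" is ignored entirely.
            return place
    return "any fast food restaurant"

print(nearest_fast_food(
    "Show me the nearest fast food restaurant which is not McDonald's"))
```

A negation-aware matcher would have to detect the pattern “not <place>” and exclude that place from the results instead of returning it.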
Category 2: Design for forgiveness – The Artificial Intelligence will make mistakes. Design the UI so users are inclined to forgive it
1. Design the tool in a way that users will forgive it when it makes mistakes – One way to design for forgiveness is to use a UI that simulates creatures or objects humans are already naturally inclined to forgive, like children or animals. Examples are care robot Alice, which resembles a grandchild, and therapeutic robot Paro, a stuffed animal seal. Deloitte created “Little AIME”, which is designed to not resemble a living creature.
Apple did not design Siri for users to forgive her. Siri is designed as an adult assistant. She sounds like an adult and she speaks like an adult. Therefore, users are less inclined to forgive this adult assistant when she cannot “understand” simple commands.
2. Design delightful features to increase the likelihood of forgiveness – Part of designing for forgiveness is to offer users delightful features for them to forgive the Artificial Intelligence when it makes mistakes. An example is to design humor within the UI, like with Siri and Alexa.
Cocorobo is a vacuum cleaner whose delightful feature is that it can talk to people. This feature helped a Japanese adult feel less isolated, since the robot greeted him every time he came home (to an empty house). I bet he would forgive his little friend if it smeared his house with puppy feces.
3. Design the ability to use Artificial Intelligence without internet connectivity – It is important to avoid designing a UI that relies solely on internet connectivity. Users should gain value from the Artificial Intelligence regardless of whether it’s connected to the internet.
The pet feeder, Google Assistant, Roomba, Spotify, and Netflix are all examples of Artificial Intelligence-enabled UIs that should be able to function without internet connectivity. Voice assistants Siri and Alexa do not function without it.
Category 3: Data transparency and tailoring – Be transparent about collecting data and offer users the ability to tailor it
1. The Artificial Intelligence should be transparent about what data it has on the user – The Artificial Intelligence possesses data about the user. Given concerns about privacy and data leaks, be transparent to users and offer them the ability to monitor the Artificial Intelligence’s data and activity.
It’s not necessary to show the algorithms to the user, but it is important to mention which data is used for the Artificial Intelligence to act as an agent for the user.
For example, Amazon Alexa knows when you go to sleep or when you’re out of town. If she gets hacked and this information falls into the wrong hands (like burglars’), that’s something to be anxious about. That’s why it’s so important to address the privacy issue here and be transparent about what data the Artificial Intelligence has.
2. Users should be able to provide input so the Artificial Intelligence can learn – Offer users the ability to tailor the collected data, since the Artificial Intelligence cannot apply reason and logic within context. Machines need humans to provide context via feedback. Design ways for users to provide this input.
In the Google Translate Community, Google lets users manually translate and confirm words and sentences. This helps the machine learn and understand context and translate sentences from one language to another. Thus, the Artificial Intelligence is continuously “learning”, since it is fed large amounts of tailored data at high speed. This way it keeps getting better at translating.
In the example of the Roomba and the puppy, the Roomba could ask what the problem is and use this input to learn for next time. This way the interaction would be beneficial for both user and the robot.
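This human-in-the-loop feedback idea can be sketched minimally. The example below is hypothetical (not Google’s system, and the word pairs are made up for illustration): user-confirmed corrections are stored and take precedence over the machine’s raw guess.

```python
# Hypothetical sketch of a feedback loop: human-confirmed data
# takes precedence over the machine's original guess.

class FeedbackTranslator:
    def __init__(self, machine_guesses: dict):
        self.machine_guesses = machine_guesses  # the model's raw output
        self.user_corrections = {}              # human-confirmed pairs

    def confirm(self, source: str, correct_target: str):
        """A user manually confirms the right translation."""
        self.user_corrections[source] = correct_target

    def translate(self, source: str) -> str:
        # Tailored human input beats the machine's guess;
        # unknown words fall through unchanged.
        return self.user_corrections.get(
            source, self.machine_guesses.get(source, source))

t = FeedbackTranslator({"gezellig": "cozy"})
print(t.translate("gezellig"))      # the machine's own guess
t.confirm("gezellig", "convivial")  # a user provides context
print(t.translate("gezellig"))      # the corrected output wins
```

The design choice here mirrors the principle: the machine stays useful without feedback, but every intervention the user makes immediately improves future output.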
3. Users should be able to adjust what the Artificial Intelligence has learned – The Artificial Intelligence configures itself by machine learning and by monitoring the user’s behavior. However, it will make mistakes and will output predictions that users do not desire. Therefore, besides designing for discovery and forgiveness, offer the user the ability to tailor predictions to their liking by, for example, adjusting what the Artificial Intelligence has learned. This is what makes Artificial Intelligence unique: it can completely forget what it has learned, while with humans an experience stays, and biases and assumptions are created until proven otherwise.
E.g. you own a Netflix account and your friend does not. Your friend watched movies on your Netflix account which you don’t like. This means the result of the algorithm has changed due to your friend’s behavior. In this case, you would like to tailor the predictions by deleting your friend’s preferences/history.
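The Netflix example can be sketched as follows. This is a toy illustration (not Netflix’s recommender; the viewers and genres are invented): predictions are built from watch history, and deleting one viewer’s entries immediately re-tailors them.

```python
# Toy sketch of tailoring predictions by deleting history
# (hypothetical data; not Netflix's actual recommender).

from collections import Counter

history = [
    ("me", "drama"), ("me", "drama"), ("me", "sci-fi"),
    ("friend", "horror"), ("friend", "horror"), ("friend", "horror"),
]

def top_genre(events):
    """Recommend the most-watched genre in the remaining history."""
    return Counter(genre for _, genre in events).most_common(1)[0][0]

print(top_genre(history))  # the friend's behavior skews the prediction

# The user tailors the data by removing the friend's history.
cleaned = [(viewer, g) for viewer, g in history if viewer != "friend"]
print(top_genre(cleaned))  # the prediction reflects the owner again
```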
Category 4: Privacy, security and control – Gain trust by driving privacy, security and the ability to control the Artificial Intelligence
1. Design top-notch security for users to trust Artificial Intelligence with personal data – Current technologies let users secure and lock their personal data by means of face and voice recognition, fingerprints, and two-factor authentication (e.g. a combination of passwords and passcodes sent via call or text message). Design this into the UI not only to make the user feel the Artificial Intelligence is secure, but to actually secure their data and protect their privacy. A mere passcode won’t be enough. When Artificial Intelligence-enabled UIs drive privacy and security, users are more likely to trust them.
What I noticed when jumping on the cryptocurrency train was the importance of privacy and security in the UIs that handle my money. I did not even mind the two-factor authentication setups because I knew they were for my own good.
Knowing that Artificial Intelligence will hold crucial, personal, and privacy-sensitive data of mine, as a user I find it important that the UI does its very best to protect my data.
2. Prove delivery on promises by offering test runs – Especially when a product is new, users want to test whether it can really deliver what it promises to do. When the user knows the product indeed delivers what it promised to do, the user will trust the product more.
I did this when I configured my Roomba to automatically vacuum my house every day. The first time I set this up, at 14:46, I scheduled it to run automatically at 14:47. I wanted to check whether it would indeed clean at the scheduled time. After I saw that it did, I trusted the Roomba to do this for me every day at 12:00 while I am at work.
3. Design the ability for users to intervene and take over control – Design the tool in such a way that the user can take control of it by means of input, anytime the user wants.
I can press two buttons on my pet feeder to dispense kibble at any time. Moreover, I can press the “clean” button to make the Roomba start or stop cleaning (at an unscheduled time) or physically move the Roomba to a spot I want it to clean.
4. Artificial Intelligence should learn from the user’s interventions – When a user intervenes and takes control of the Artificial Intelligence, let it learn from this behavior. The Artificial Intelligence should remember the intervention in order to give the user better output next time.
For example, the Nest thermostat learns what you like as you turn it up or down at different times. It remembers your interventions and creates a personalized schedule for you.
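Learning from interventions can be sketched in a few lines. This is a hedged illustration, not Nest’s actual algorithm: each manual override is remembered, and the proposed temperature for an hour becomes the average of what the user set at that hour, falling back to a default where no data exists.

```python
# Hedged sketch of learning from user interventions
# (not Nest's actual algorithm).

from collections import defaultdict

class LearningThermostat:
    def __init__(self, default_temp: float = 20.0):
        self.default_temp = default_temp
        self.overrides = defaultdict(list)  # hour -> temps the user set

    def user_sets(self, hour: int, temp: float):
        """The user intervenes and turns the dial; remember it."""
        self.overrides[hour].append(temp)

    def target_for(self, hour: int) -> float:
        """Propose what past interventions taught us for this hour."""
        temps = self.overrides[hour]
        return sum(temps) / len(temps) if temps else self.default_temp

t = LearningThermostat()
t.user_sets(7, 21.0)
t.user_sets(7, 23.0)
print(t.target_for(7))   # learned from the two interventions
print(t.target_for(14))  # no data yet: fall back to the default
```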
5. Artificial Intelligence should not do anything without the user’s consent – The Artificial Intelligence should ask for review and permission before executing tasks with significant consequences. Yes, the Artificial Intelligence should be proactive; however, the user is still the final decision maker and should confirm whether they want the Artificial Intelligence to act proactively.
E.g. Digit is a platform that analyzes your spending habits. It acts as a chatbot and proactively asks your consent to transfer money so you can save.
6. Artificial Intelligence should notify users of system errors – Notify the user when there are hurdles. Like in any interface, the machine should be clear on what it needs from the user. The Artificial Intelligence can offer the user solutions to fix the error.
When my Roomba gets stuck, it makes a beeping noise to let me know there’s something wrong. If it did not beep, I would not know it stopped working.