Smart Frictionless Experiences
Back in May, at Google’s annual I/O conference, Sundar Pichai revealed Google’s new AI voice assistant, Duplex. Pichai played an audio recording of a salon appointment being made. On one end of the phone was the salon manager, and on the other was Duplex, conversing just like a human would, including ‘ums’ and ‘ahs’ in all the right places for a slick, if slightly uncanny, effect. The booking was made successfully, and a notification was sent to the customer’s phone. A mind-blowing demonstration.
Pichai framed the demo as a time-saver, outlining a few examples of the kinds of repetitive daily interactions we have with people over the phone that could become a thing of the past when Duplex is out in the wild — booking restaurants, calling a plumber, ordering a hit on your neighbour that plays loud music on a Sunday night. But, apart from a prison sentence, at what cost does convenience come?
It’s clear from the demonstration that the salon manager has no idea they’re conversing with a bot. Does it matter? In the case of small, one-off bookings like this you could argue that the benefit outweighs the cost. But when we consider this as the beginning of a behavioural pattern-shift, the scenario begins to echo other situations in which we’re only just waking up to connectivity’s ability to disconnect us — or even push us apart.
Up until now, most of us reading this may feel that we have been the agents in our relationships with technology. We are the ones who push buttons on phones to send messages or emails to each other, use the self-checkout at the supermarket, or even let Spotify pick our songs for us. We make conscious choices about when to use technology to abstract our daily interactions with one another, or when to just pick up the phone and talk to another person. But as starkly illustrated in the impressive Google demonstration, in some scenarios we don’t have that choice, and we often don’t even realise it’s being denied to us. Convenience 1, agency nil.
But what do these small connectivity-powered agency-infractions have to do with disconnection, or division, and as designers what might we do about it? Social media offers the clearest and most powerful example of the link between inscrutability and intent. By now, given the revelations surrounding the US elections and Cambridge Analytica, we must all be aware of the concept of a filter-bubble — the idea that we insulate ourselves from opposing viewpoints by only listening to those we already agree with. In recent months, it’s become increasingly apparent that the voices we feel most affinity with, or conflict with, may not exist at all. Such is the opaque nature of how content ends up in our feeds that it’s impossible to know whether voices are honest expressions of opinion or cynical attempts at manipulation.
How can we make informed decisions about how to act out our online lives if the information those decisions rest on is being deliberately distorted or denied to us?
As designers we often talk about seamless experiences — interactions in which tasks are executed almost without friction or inconvenience. The idea is that seamless experiences limit frustration, lead to increased and more enjoyable usage, and deliver a better bottom line for the client. Google’s Duplex voice assistant is a good glimpse of the benchmark for future seamless experiences, and further advances in machine learning are accelerating us towards zero-friction experiences — not just in interactions, but in services as a whole. But seamlessness often camouflages the links between technology, people and data, and renders the workings behind the service invisible to the users at both ends.
To take one example, insurance companies are using machine learning to augment their risk-assessment techniques in underwriting. Insurers are able to take multiple, disparate datasets into account in addition to the submitted application, and make decisions based on patterns identified by the computer. When it comes to communicating the outcome to a client, it may not always be possible to articulate why an application has been refused, or even which datasets were used. For the customer, this can breed distrust, disempowerment and frustration. Where a company can describe how and why a decision was reached, the customer is in a better position to appeal it, or to learn what positive steps they can take to improve their situation.
When designing new connected experiences that are powered by long service-chains, multiple data points and opaque processes, and that impact various people or groups, we must remember that friction-free experiences are only enjoyable when they work in our favour. Giving some thought to illustrating the provenance of an interaction or service can help empower customers to find the information they need to take a positive next step — completing a transaction or application, for example. This is especially true when the service is of a new type, or uses a service-chain that customers may not be aware of or familiar with. Without appropriate signposting and accountability, we may find ourselves in Kafkaesque corridors of bots, hunting for a way out.