I recently went to a bloodwork lab in my hometown for an appointment. When I arrived, there was a long line. I commented to the man closest to me, “Guess this is the place to be, huh?” He gestured at a woman sitting quietly in a chair: “She’s next, but…”
The woman was anxious to the point of near tears. In heavily accented English, she said, “I can’t. I don’t have card.” After talking with her for a few minutes, I learned that her daughter had been unable to find her insurance card at home, and had dropped her off here without it, but now the kiosk seemed to require it to move forward.
With no clear path forward through the technology, and no apparent way around it, the frightened woman simply returned to her seat.
In addition to being an occasional visitor to the bloodwork lab, I am also a Human-Computer Interaction professional, having worked in this field since the early 2000s. Although computer kiosks have been around since about that same time, recent years have seen a dramatic surge both in the number of kiosks in use and in their application to novel purposes. After the initial throes of COVID, when we began to realize that we could not simply wait out the pandemic, technology stepped in to take the place of many previously human roles. Whether a simple tablet-on-a-pole or a more substantial installation, kiosks were often brought in to fill the roles of receptionists, cashiers, or even restaurant wait staff.
Computer kiosks, a broad category of technology generally referring to a computer on a stand or in a booth where simple self-service tasks can be accomplished, have been in common use since at least the early 2000s. Their predecessors, such as early self-checkout counters and ATMs, date back a few decades before that. Before the ATMs were vending machines, and before the vending machines were the original kiosks: newsstands and other semi-permanent, semi-self-service booths.
As people emerged from COVID lock-downs, the head-sized tablet screens and pedestals at roughly the height of a human (or adjustable, for ADA compliance) made kiosks a seemingly ideal stand-in (no pun intended) for these roles, minimizing the spread of disease by eliminating those points of human contact. They could even be placed in many of the exact same locations where receptionists previously worked, so customers had an easier time recognizing what they were supposed to do.
As with many technological advancements, the entry of kiosks into formerly human roles has highlighted many of the nuances of human interaction that we often take for granted, specifically what happens when the interaction doesn’t go as planned. Humans, both as listeners and speakers, use a myriad of cues to guide our interactions. A pause can indicate a need for clarification. Dropping eye contact can show anxiety or uncertainty. We quickly pick up on an accent and use the context of “this person may not understand me well” (sometimes with unintended consequences) to guide our choice of words, pacing, and so on. In my recent experience at the bloodwork lab, I saw firsthand what happens when these human adaptations are not available, and the human cost in a situation where understanding, and being understood, can determine whether someone receives critical services.
Early Human-Computer Interaction user research was primarily focused on whether the technology worked as intended, rather than on the socioemotional aspects of the interaction. User studies largely took place at universities, with comparatively well-educated, largely white, and largely native English-speaking students as the test subjects. Each of these factors (education level, socioeconomic group, and native language) can impact how well any given person can manage an interface but, given the homogeneity of early user studies, many of these complexities simply didn’t come up.
Recent decades have seen increased interest in these tools for global business collaboration and civic technology applications. User research has made great strides in understanding how socioeconomic factors impact usability, beyond whether the tool simply “works.” These impacts can become even more pronounced outside the research lab. The same interface that is perfectly interpretable in a lab setting can be much trickier in a loud, crowded shopping mall, or with the social pressure of a line building behind the user.
While research on how individual socioeconomic factors affect usability has increased, we still struggle with intersectionality. A tool may be usable by someone who is a non-native English speaker, or who is older than the typical user, or who, for whatever reason, has low trust in or familiarity with technology. But what happens when all of these factors collide?
As someone with both the UX skills and the social and linguistic skills to feel confident that there was in fact a way forward, whether through the kiosk or around it, I reached out to the woman in the lab. I asked her quietly if she would like me to go through the process with her, and she accepted with such relief that tears slipped down her cheek. When it was her turn, I walked up with her. The kiosk did have several languages, but not her native language, Vietnamese. I read her the questions, clarifying only a few. Mostly I just read the questions in English, she answered me in English, and I typed it. I knew how to skip past the insurance card step and still get her an appointment. Then she sat down and I fell back behind the original next-in-line man.
On the surface, the kiosk “worked.” She (with some guidance and maybe more importantly some support) got through the check-in using English, and the UI did allow for her to proceed even without the forgotten insurance card. So what went wrong, such that this woman almost left without the medical care she needed?
It comes down to intersectionality and context.
If she were an older, native English-speaking woman without her insurance card, she would likely have felt more confident speaking up about her needs, blaming the kiosk, or simply knocking on the office door for help. She would have had the cultural confidence to know there were ways around the kiosk.
If she were a non-native English speaker, and older, but had her insurance card, she likely would have gotten through the process. She may have been anxious, but she could have navigated it.
If she were younger, but still a non-native English speaker without her card, she would likely at least be more familiar with technology/tablets in general, and less easily intimidated.
The human factor that is hardest to test for, and a very likely outcome of an intersectional collision, is the emotional response. She was afraid.
She spoke the system’s default language, but not fluently. She knew to use the kiosk, but not that she could get out of using it (by banging on the receptionist’s door) if it didn’t work. She knew what materials she needed, but not what to do if she didn’t have them. She was being stared at, and was holding up the line, and she knew it.
We can’t test for every combination, but we can do better.
As UX professionals, we can – and should – test for more combinations. We can do the homework to know our populations, and test with these same people rather than the easy-to-recruit demographics. And we can test in more realistic situations, such as noisy, stressful waiting rooms.
As organizations implementing kiosks (or other self-serve technology), we can recognize that UX design and testing are limited and fallible, and have a plan for complicated situations so that people are not denied services due to a human-computer mismatch.
During COVID, there was a push to eliminate human contact wherever possible. And now, with advances in AI, that tendency will likely only become more prevalent.
Efficiency is good. The best systems are those that can process each need with the least effort. A majority of customers (native English speakers who have used an iPad and have their cards) will go through the kiosk in a minute or less. And most other users will still get through the system well enough.
At the same time, even the best technology needs to always have an off-ramp. In healthcare, we can assume that fear, intimidation, and anxiety are likely even without the added challenges this woman faced. There will always be cases where the technology doesn’t work for that person, in that situation, for that purpose. And it is not acceptable for one woman to go home without medical care because of a kiosk. Have the kiosk. Have the AI. Maximize efficiency where you can, in all the ways you can. But do it so that we can preserve bandwidth for the person who is afraid and needs help.
I lead A1M’s design practice. I’ve spent my career as a user experience and service designer applying design to the nonprofit and government services space.
We are dedicated to creating simple, sustainable solutions. Learn more about our Services.