Machine Body Language: Expressing a Smart Speaker’s Activity with Intelligible Physical Motion

Mirzel Avdic, Nicolai Marquardt, Yvonne Rogers, and Jo Vermeulen.
Presented by Mirzel Avdic.

In DIS '21: Proceedings of the ACM Designing Interactive Systems Conference, Nowhere and Everywhere, June 28 – July 2, 2021, 16 pages.
[26.7% acceptance; 571 submissions]


People’s physical movement and body language implicitly convey what they think and feel, what they are doing, and what they are about to do. In contrast, current smart speakers miss out on this richness of body language, relying primarily on voice commands. We present QUBI, a dynamic smart speaker that leverages expressive physical motion – stretching, nodding, turning, shrugging, wiggling, pointing, and leaning forwards/backwards – to convey cues about its underlying behaviour and activities. We conducted a qualitative Wizard of Oz lab study in which 12 participants interacted with QUBI in four scripted scenarios. From our study, we distilled six themes: (1) mirroring and mimicking motions; (2) body language to supplement voice instructions; (3) anthropomorphism and personality; (4) audio can trump motion; (5) reaffirming uncertain interpretations to support mutual understanding; and (6) emotional reactions to QUBI’s behaviour. Based on these findings, we discuss design implications for future smart speakers.