Voice User Interfaces

Alexa, Amazon’s speaker-based virtual assistant, is ubiquitous. And it’s in good company: Siri, Cortana, and Google Assistant also seem to be here to stay. In fact, we may be in the midst of a transition from screen-based interfaces to voice user interfaces.

The rise of voice user interfaces poses new linguistic and conceptual challenges to users. Given the novelty of the technology, coupled with the use of voice, users vary in the extent to which they ascribe anthropomorphic properties to the device. For example, is it a ‘she’ or an ‘it’? Does it ‘understand’ or ‘think’? When and when not?

A research project I’m currently working on shows that the variation in how people refer to voice user interfaces can be predicted by the extent to which they like the product, their technical expertise, and even the company’s own description of the product.

Findings

Our analysis covers approximately 30,000 user reviews of the Amazon Echo posted to Amazon.com. Using NLP techniques, we examine how reviewers refer to the product, based on sentential subjects, sentential objects, verbs, and predicates. The figure below provides a preliminary view of the findings.

Figure: Product references to the Amazon Echo in ~30,000 reviews (subject position).

The figure above breaks down the percentage of times Amazon Echo reviewers refer to the product using anthropomorphic verbs (i.e., verbs from the PERSON domain) like ‘understand’ and ‘think’, coupled with four different sentential subjects: Alexa, Echo, she, and it. Example sentences include:

I love Alexa. She’s my new BFF.

It’s an impressive device.

She doesn’t understand anything. She’s an idiot.
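
To give a concrete sense of what extracting these subject–verb pairs could look like, here is a minimal sketch using spaCy’s dependency parser. The subject set, the small PERSON-domain verb list, and the helper function subject_verb_pairs are illustrative assumptions for this post, not the project’s actual pipeline or lexicon.

    from collections import Counter

    import spacy

    # Requires the small English model: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    # Illustrative labels only; the study's categories are defined elsewhere.
    SUBJECTS = {"alexa", "echo", "she", "it"}
    # Hypothetical PERSON-domain verb list, for demonstration.
    PERSON_VERBS = {"understand", "think", "know", "listen", "learn"}

    def subject_verb_pairs(text):
        """Yield (subject, verb lemma, is_person_verb) for each clause whose
        grammatical subject is one of the product references of interest."""
        doc = nlp(text)
        for token in doc:
            if token.dep_ == "nsubj" and token.head.pos_ in ("VERB", "AUX"):
                subject = token.text.lower()
                if subject in SUBJECTS:
                    verb = token.head.lemma_.lower()
                    yield subject, verb, verb in PERSON_VERBS

    counts = Counter()
    reviews = [
        "I love Alexa. She's my new BFF.",
        "It's an impressive device.",
        "She doesn't understand anything. She's an idiot.",
    ]
    for review in reviews:
        for subject, verb, person_like in subject_verb_pairs(review):
            counts[(subject, person_like)] += 1

    print(counts)

In practice a richer verb lexicon and extraction for object position would be needed as well, but this illustrates the basic dependency-based step behind the subject-position counts shown in the figure.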

That’s all I can talk about for now, but if you want to know more, read my blog post here.