
The Machine Learning You Never Asked For

What makes a person feel like a system is adapting to them?

Is it necessarily machine learning?

Summary

  • Tested a humanlike virtual social agent I had previously designed for real-time face-to-face interaction with people

  • Examined how people’s evaluations of the system changed over repeated interactions, using ethnographic fieldwork and quantitative, laboratory-based usability testing

  • Though the system has no capacity for adaptation, people reported that it had adjusted to their behavior or otherwise improved

  • Results suggest that designers cannot assume that using machine learning (ML) or other adaptive techniques will give users a stronger feeling of “adaptation” than a system with no ability to adapt to them at all
     

My Roles

Interaction designer, programmer, UX researcher

Project

This project investigated whether and how people evaluate a virtual, humanlike, artificially intelligent creative collaborator differently over time, even though it has no ability to change the way it interacts with them. The agent was designed to take the role of a human performer of “free improvisation,” a musical practice in which musicians have more or less complete liberty in what they play:

The main goal in designing this system was for musicians to feel that it was just another human performer, not a machine. Here’s a clip of the system playing electronic instruments with a human performer, Aram Shelton:


Methods

I used both a quantitative, laboratory-based experiment and longer-term ethnographic participant-observation.

To collect the quantitative data, I invited individual improvisers to meet with me for a brief in-person experiment at the Center for New Music and Audio Technologies in Berkeley, California:

[Photo: the Center for New Music and Audio Technologies (CNMAT), Berkeley]

I asked each of them to play ten short pieces (2-3 minutes each) with the system. In a randomly selected subset of these pieces, the system was set not to listen to the performer at all, though improvisers were not made aware of this experimental condition. After each take, they rated the system on four criteria using a 10-point scale and provided brief written qualitative comments.
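
For concreteness, here is a minimal sketch (in Python) of how such a session could be scripted and logged. The function names, the 50/50 split between listening and non-listening takes, and the criterion labels are illustrative assumptions, not details of the actual experimental setup.

    import random

    # Rating criteria collected after each take (10-point scale); the labels here
    # paraphrase the four criteria reported in the results.
    CRITERIA = ["meaningful", "satisfying", "relevant", "inspired"]

    def make_session_plan(n_takes=10, n_not_listening=5, seed=None):
        """Randomly assign the 'not listening' condition to a subset of takes.
        The 50/50 split is an illustrative assumption, not the study's actual ratio."""
        rng = random.Random(seed)
        silent = set(rng.sample(range(n_takes), n_not_listening))
        return [{"take": i + 1, "listening": i not in silent} for i in range(n_takes)]

    def record_take(take_number, ratings, comment=""):
        """Log one take's ratings (1-10 for each criterion) and a free-text comment."""
        assert set(ratings) == set(CRITERIA)
        assert all(1 <= r <= 10 for r in ratings.values())
        return {"take": take_number, "ratings": ratings, "comment": comment}

    if __name__ == "__main__":
        for take in make_session_plan(seed=1):
            print(take)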

 

At the end of the experiment, they provided further qualitative, written comments and I conducted a brief informal interview with them about the experience as a whole.

 

For the ethnographic portion of this project, I arranged similar meetings with improvisers over time, but with a much more open-ended structure. They could play with the system as long as they liked and I let them lead the conversation about their experience playing with it.

 

Because free improvisation is a resolutely obscure musical subculture, recruiting participants required a focused effort to locate musicians who actually play this kind of music.

Results

In both the laboratory and ethnographic settings, people found that the system had either improved or adapted to them over several interactions. This is quite surprising, given that the system has no ability to do either.

 

In the laboratory setting, people found that the system had improved over the course of the experiment, regardless of whether it was actually receiving audio input from them at all. Overall, their ratings rose: they found their interactions with the system more meaningful (a 29.4% improvement), were more satisfied with the system’s responses to their playing (21.7%), found those responses more “relevant” to their playing (19.8%), and felt the interaction as a whole was more inspired (18.2%).

[Chart: percentage improvement across the four rating criteria]
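
As a rough illustration of how improvement figures like these can be computed, the sketch below takes the percent change between mean ratings on early and late takes and checks for a monotonic trend across take order. Both the ratings and the choice of analysis are hypothetical; they are not the study’s data or its actual statistical procedure.

    from statistics import mean
    from scipy.stats import spearmanr

    def percent_improvement(early_ratings, late_ratings):
        """Percent change between mean ratings on early vs. late takes."""
        return 100.0 * (mean(late_ratings) - mean(early_ratings)) / mean(early_ratings)

    # Hypothetical "relevance" ratings across ten takes, for illustration only.
    relevance = [5, 5, 6, 6, 6, 7, 7, 7, 8, 8]

    print(percent_improvement(relevance[:5], relevance[5:]))  # ~32% on these made-up numbers

    # One plausible check for a trend across take order: a rank correlation.
    rho, p_value = spearmanr(range(1, 11), relevance)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")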

 

Similar results emerged from the larger ethnographic project. Over repeated interactions, improvisers suggested that the system had improved even though I had done nothing to change it. For example, at a concert featuring several improvisers who had played with the system before, some of them said that the system had "really grown up" and was "sounding a lot better" than before.

Interpretation

 

These results are strange. They suggest that people find a system “adaptive” even when it is completely incapable of adapting. The strongest evidence of this is that people’s ratings of the “relevance” of the system’s responses to their playing increased significantly over repeated interactions.

 

So what’s happening?

 

An improved impression of something simply through repeated exposure to it is a widely observed phenomenon known as the “exposure effect.” However, this effect has mainly been demonstrated with things that genuinely do not change (e.g., photographs and other still images), as opposed to a machine that actively responds in a humanlike manner.

 

Most likely, the perception of the system’s “improvement” over time is driven by a dynamic feedback loop (sketched in code after the list):

  • The human acts.

  • The system responds.

  • The human adjusts to the system’s responses.

  • The system responds “differently,” but only because the input is different.

  • The human feels the system is “adapting.”
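
As a toy illustration of that loop, the sketch below uses made-up response and adjustment rules: the mapping from input to output never changes, yet the outputs drift from turn to turn because the human keeps changing the input.

    def system_response(human_input):
        """A fixed, non-adaptive mapping: the same input always produces the same output."""
        return (human_input * 7) % 10

    def human_adjustment(previous_output):
        """The human shifts their playing toward what the system just did (a made-up rule)."""
        return (previous_output + 1) % 10

    human_input = 3
    for turn in range(5):
        output = system_response(human_input)
        print(f"turn {turn}: human plays {human_input}, system answers {output}")
        # The mapping never changes, but its outputs differ from turn to turn
        # because the input does -- which can read as the system "adapting."
        human_input = human_adjustment(output)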

 

Implications

 

Results like these sound as if they should come from a system that uses ML to adapt to human behavior, but that is not the kind of system that was tested at all. For UX and HCI practitioners, this means there is no reason to assume that an ML-based system will automatically be better at making users feel that the system is adapting to them than a system that never changes how it responds to their input. For those not using ML in UX or HCI contexts, these results also suggest that users may believe a system or interface is adapting even when you haven’t designed it to do so.

 

More broadly, these results further establish the importance of understanding what users think is happening, regardless of whether that perception matches what the system actually does.

Related Publications

2018. “De-Instrumentalizing HCI: Social Psychology, Rapport Formation, and Interactions with Artificial Social Agents.” New Directions in 3rd Wave Human-Computer Interaction. ed. Michael Filimowicz and Veronika Tzankova. Springer.

2019. “Feeling Like an Agent.” Array: Journal of the International Computer Music Association. Fall 2019. Special Issue on Agency. ed. Rama Gottfried and Miriam Akkerman.
