Music streaming services make massive use of algorithms in their music recommender systems (MRS) to guide users to tracks they are likely to enjoy. However, the black-box nature of these algorithms makes them difficult for users to understand, both in terms of how they work and of the music they predict. The field of explainable AI (XAI), and in particular its “explanation” side, has emerged to make the uses of AI (including MRSs) more comprehensible to users. This paper aims to observe, using an experimental method, whether explaining an MRS algorithm induces a change in discovery behavior on music streaming services. In a theoretical framework, we model two types of users' discovery behaviors, namely “study” and “browse” behaviors. We then test in the lab the effects of explanation on these behaviors by explaining a simplified MRS based on the music genre criterion. Our experimental design consists of a music listening session lasting 20 minutes, during which participants listen to a sample of tracks, generated by our algorithm from a preference questionnaire, while being able to “like”, “dislike” or “skip” them. Our main outcome measure is the relative time spent listening to each track; beyond a certain threshold, the track is considered “studied”. This measure enables us to observe any differences in discovery behaviors between our control groups, who receive no explanation of the MRS, and our treatment group, who receive an explanation of how the tracks are recommended.
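The threshold-based outcome measure can be sketched as follows. This is an illustrative reading of the measure, not the paper's actual analysis code: the threshold value, function names, and event representation are all assumptions.

```python
# Illustrative sketch of the outcome measure: a track counts as
# "studied" when its relative listening time (seconds played divided
# by track duration) exceeds a chosen threshold; otherwise the event
# is classed as "browse" behavior.
# STUDY_THRESHOLD is an assumed cutoff, not the paper's actual value.

STUDY_THRESHOLD = 0.5


def classify_listen(seconds_played: float, track_duration: float) -> str:
    """Label one listening event as 'study' or 'browse'."""
    relative_time = seconds_played / track_duration
    return "study" if relative_time >= STUDY_THRESHOLD else "browse"


def study_share(events: list[tuple[float, float]]) -> float:
    """Share of a session's events classified as 'study'.

    `events` is a list of (seconds_played, track_duration) pairs.
    """
    labels = [classify_listen(played, duration) for played, duration in events]
    return labels.count("study") / len(labels)
```

Comparing `study_share` between control and treatment sessions would then capture the behavioral shift the experiment is designed to detect.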
Preliminary results show that explaining recommendations increases listening time, reinforcing the study behavior predicted by our model. Moreover, this effect strengthens with participants' treatment level. Additional measures, such as jumps (navigating within a track by skipping from the beginning to the middle), provide further insight into the effect of explanations, showing that participants tend to jump less within tracks.