Spotify has changed the way many of us listen to music. Thanks to its convenience and connectivity, almost anyone can access a whole library of songs wherever they are.
However, the service hasn't been without controversy since its inception - and now it's courting another media storm over a function that could monitor users' emotions in order to make music suggestions.
Spotify's proposal for speech recognition
The issue first came to light when Spotify applied back in 2018 for a patent for technology that would allow it to analyze users' voices to suggest songs based on their "emotional state, gender, age, or accent".
It wasn't until January 2021, when the patent was formally granted, that the world began to sit up and take notice of what this could actually mean for the development of this type of artificial intelligence in the future.
Spotify said the tech would allow it to make observations about a user's environment and then play music to reflect their mood or social setting. For example, it might deduce that someone is throwing a party and so play tracks that are upbeat.
This is part of a plan to make Spotify more personal for its 345 million active users and seems to be a challenge to the dominance of existing voice assistants such as Alexa and Siri.
Currently, Spotify uses a decision tree to refine its recommendation algorithms, but this new development would also let it use contextual clues such as intonation and stress to work out whether people are happy, angry or neutral, as well as what setting they're in.
"Numerous other characterizations and classifications can be used," the patent filing said, adding that this should eventually result in less human input for users (such as not having to answer questions about the music they like prior to use).
What are the implications of this?
Spotify hopes that understanding how its users feel could create a better and more reactive service. For instance, if it can recognize that a person sounds tired, it could recommend a chillout playlist, or it might stop playing tracks that evoke anger if someone sounds frustrated.
Another function could be learning more rapidly what artists or tracks people don't like. Technology analyst Simon Forrest pointed out in an interview with Forbes that if someone tuts or makes an 'urgh' sound when a particular track comes on, Spotify might deduce the user dislikes that song and skip it before they have to issue a command.
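As a companion sketch, here is one way such an implicit feedback loop could look in code. The ReactionDetector and Player interfaces are invented purely for illustration and are not part of Spotify's patent or any real API.

```python
# Illustrative only: a hypothetical feedback loop that skips and down-ranks a
# track when a negative vocal reaction (a tut or an 'urgh') is detected.
# The ReactionDetector and Player interfaces are invented for this sketch.

from typing import Protocol


class ReactionDetector(Protocol):
    def negative_reaction_heard(self) -> bool:
        """Return True if the listener audibly objected to the current track."""
        ...


class Player(Protocol):
    def downrank_current_track(self) -> None: ...
    def skip_track(self) -> None: ...


def react_to_listener(detector: ReactionDetector, player: Player) -> None:
    """Skip the current track, without an explicit command, if the listener objects."""
    if detector.negative_reaction_heard():
        player.downrank_current_track()  # remember the dislike for future recommendations
        player.skip_track()              # move on before the user has to say anything
```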
The patent comes hot on the heels of a karaoke-like feature Spotify recently rolled out, as well as a study on how people's personality traits can influence their musical preferences.
Why the controversy - isn't this already a thing?
Spotify has already said that, so far, this is simply a patent application and that it might never actually make its way into common use.
And even if it did, a recent Axios article pointed out that emotion-recognition technology is already being widely used in security, remote working platforms and even education. Meanwhile, Amazon is reportedly working on a wearable device that can read emotions, Netflix personalizes recommendations based on mood and Twitter uses feelings to generate its timelines.
What's more, Spotify's existing technology could also be said to rely on personal feelings, with a recent study arguing that playlists are simply a means to "cultivate moods and emotions".
The argument for emotion-based AI
Spotify's supporters say there's no problem, and that we actually need more instinctive technology. After all, artificial intelligence is becoming more and more a part of our everyday lives, and Simon Forrest argues that if we want it to become truly intuitive, then we’re going to have to embrace developments such as this.
Indeed, by helping more businesses gain a deeper understanding of their customers, AI could help people benefit from discounts, marketing that's less intrusive and more relevant to them, and improved service suggestions.
There might also be positive implications for safety stemming from Spotify's potential rollout. For example, recent research found that listening to higher-tempo music at the wheel could lead to more erratic driving and more dangerous maneuvers. Recognizing that users are stressed might allow Spotify to swap from Green Day to Toto and even prevent road accidents.
Another possible application might be flagging up situations where people are in emotional turmoil. For instance, studies have shown that people with depression often listen to sad music to make themselves feel better. If a user frequently does this and gives other vocal clues indicating distress, perhaps Spotify could play adverts for mental health charities or send their contact details to the person's phone.
Protests against Spotify's use of emotions
However, many people are far from happy about Spotify's revelation. Musician and activist Evan Greer called it "beyond chilling" and has launched a campaign against the patent.
A coalition of artists and human rights organizations has also sent an open letter to Spotify urging it to "never use, license, sell, or monetize its new speech-recognition patent technology", claiming this would give it a dangerous position of power over its users.
Critics argue that the tech could make users a target for government snooping, lead to discrimination against transgender people, and allow advertisers to take advantage of those with low mood.
Indeed, there are certainly potential issues with identification and inference. For example, a study from New York University recently discovered that people who like 'Lose Yourself' by Eminem are more likely to score highly on a scale of psychopathy than Dire Straits fans. This could raise concerns such as authorities obtaining this type of information via Spotify and using it to place restrictions on individuals, or employers using it as an excuse to turn down job applicants.
Another potential issue is that any form of advanced AI could be a treasure trove for hackers. According to the 2021 Cyber Security Statistics report from PurpleSec, there are more than 30 million attacks worldwide every year, which equates to 80,000 per day.
Companies holding more information on consumers could increase people's vulnerability to having their money or identities stolen.
What's next?
The furor over Spotify's plans is still rumbling on, so it remains to be seen whether the company will continue with its project or back down in the face of criticism. However, since an app researcher has reportedly discovered code for a voice-activation feature - 'Hey Spotify' - on its platform, it may be that we already have our answer.