Highlight
Google for music
Achievement/Results
NSF-funded researchers Luke Barrington and Gert Lanckriet at the University of California, San Diego have developed and launched an online computer game that collects associations between music and descriptive words or “tags”. This “human computation” data is used to train a machine that learns how people perceive music and can describe millions of songs with thousands of relevant words.
“Herd It” (www.herdit.org) is an online, social music game, deployed on the Facebook social network, that collects and constantly updates a large set of reliably labeled music information at low cost. Players of Herd It are connected in real time with “the Herd” – other music fans who are playing at the same time. All players listen to the same piece of music and Herd It offers quick, fun minigames that invite players to describe what they hear. Players score points if their description overlaps with the rest of the Herd. In this way, Herd It collects the consensus opinion about how people describe the music. An example of the gameplay is shown in Figure 1, where players are asked to describe the emotion of the music that they hear.
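Herd It's actual scoring rules are not detailed here, but the consensus idea can be sketched in a few lines of Python. In this hypothetical rule (the function name, player names, and tags below are illustrative), each player earns a point for every other player who submitted the same tag, so descriptions that agree with the Herd score highest:

```python
from collections import Counter

def score_round(tags_by_player):
    """Score one round under a simple consensus rule: each player
    earns a point for every other player who chose the same tag.
    Returns (per-player scores, winning consensus tag)."""
    counts = Counter(tags_by_player.values())
    scores = {player: counts[tag] - 1
              for player, tag in tags_by_player.items()}
    consensus_tag = counts.most_common(1)[0][0]
    return scores, consensus_tag

# Example round: four players describe the emotion of the same clip.
round_tags = {"ann": "happy", "bob": "happy", "cat": "mellow", "dan": "happy"}
scores, consensus = score_round(round_tags)
# "happy" is the consensus; ann, bob, and dan each agree with two others.
```

Scoring by agreement is what makes the collected labels reliable: a player can only score by predicting what other music fans would say, so idiosyncratic or spam tags earn nothing.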
Herd It offers competitive, real-time, interactive game play that motivates players to provide accurate tags. Players enjoy the social experience of music discovery with like-minded music fans and, while they play, Herd It collects reliable associations between music and tags. Harnessing the wisdom of the crowd in this way allows the collection of the first large-scale data set of audio clips associated with high-quality descriptive tags, which would be difficult and expensive with traditional methods (e.g., surveys, professional labelers). Herd It also collects demographic and sociographic information about players, offering a hitherto unavailable level of insight into the subjective experience of how different people hear and describe music.
Herd It is a game with a purpose. The data contributed by players are used as training examples for a computer audition system that learns to associate music with descriptive tags. Trained on a limited amount of data, this system has already demonstrated human-level accuracy in semantically annotating music with hundreds of words. By adding the knowledge collected by Herd It, the resulting system will have a rich understanding of how to describe millions of songs with thousands of musically relevant words. This system powers a semantic search engine – “Google for music” – that allows music fans to discover the perfect song to suit their mood, by describing what they would like to hear. This frees music lovers from having to know the names of artists or songs and has the potential to open up the vast amounts of undiscovered music to a huge audience. As an example, the results of a search query for “funky party music with a horn section”, including automatically-generated semantic descriptions of each song, are shown in Figure 2.
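Once a model can assign tag scores to songs, semantic search reduces to ranking. The sketch below is a minimal illustration of that idea, not the actual system: the song names and tag probabilities are invented, standing in for the per-song tag scores a trained audition model would produce.

```python
# Hypothetical P(tag | song) scores, as a learned annotation model
# might output. All names and numbers here are illustrative.
song_tags = {
    "Song A": {"funky": 0.9, "party": 0.8, "horn section": 0.7, "mellow": 0.1},
    "Song B": {"funky": 0.2, "party": 0.4, "horn section": 0.1, "mellow": 0.9},
    "Song C": {"funky": 0.7, "party": 0.9, "horn section": 0.6, "mellow": 0.2},
}

def search(query_tags, library):
    """Rank songs by the sum of their scores for the queried tags;
    a song missing a tag contributes zero for that tag."""
    ranked = sorted(
        library.items(),
        key=lambda item: sum(item[1].get(tag, 0.0) for tag in query_tags),
        reverse=True,
    )
    return [name for name, _ in ranked]

print(search(["funky", "party", "horn section"], song_tags))
# Songs A and C outrank Song B for this "funky party" query.
```

Because every song carries scores for thousands of tags, a free-text query like “funky party music with a horn section” can be matched against the whole catalog without any artist or title metadata.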
This work is a great example of how human computation can be used to greatly improve machine learning. Luke Barrington is an NSF IGERT (Integrative Graduate Education and Research Traineeship) fellow in the Vision and Learning in Humans and Machines Traineeship program at UCSD run by Professors Virginia de Sa and Garrison Cottrell.
Address Goals
This work provides a “Google for music” with practical applications. It may also help us learn more about how humans perceive music.
The Herd It game is a valuable experimental platform, and the music classification system it trains is a useful tool in its own right.