At Mendeley, once a month, we have a hack day. I usually try to do something different, for example using our data or our Open API in a new way. So far I’ve built a game, a screensaver, a file system… and now a music/sound generator!
Why base a music generator on the documents in a user’s library? A popular hack is to pull data from our internal services or the OAPI and build some visual representation of it. I wanted to swap the usual sense, replacing sight with hearing (I had already replaced sight with smell in the olfactory notifications hack).
So, instead of processing data to create graphics, I processed data to create sounds (my first idea was music, but that didn’t go so well).
How did I do it? I wrote a Python script that uses a MIDI module to generate the sound. It fetches the readership data of the user’s library through the Mendeley OAPI. The more readers a document has, the higher the pitch of its note, and the duration of each note is proportional to the number of countries it has readers in. Where a visual graphic has two axes (X and Y), here there are also two dimensions (pitch and duration).
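To give an idea of the mapping, here is a minimal sketch of that approach. It assumes the `midiutil` package and uses placeholder readership numbers in place of the real documents the script pulls from the Mendeley OAPI; the field names and scaling are illustrative, not the exact code from the hack.

```python
# Sketch: map library readership data to MIDI notes.
# Assumes `pip install midiutil`; the `documents` list stands in for
# the data the script would actually fetch from the Mendeley OAPI.
from midiutil import MIDIFile

documents = [
    {"title": "Paper A", "readers": 3,   "reader_countries": 1},
    {"title": "Paper B", "readers": 40,  "reader_countries": 5},
    {"title": "Paper C", "readers": 120, "reader_countries": 12},
]

midi = MIDIFile(1)                     # single track
track, channel, volume = 0, 0, 100
midi.addTempo(track, 0, 120)

max_readers = max(d["readers"] for d in documents)
time = 0
for doc in documents:
    # More readers -> higher pitch (scaled into ~two octaves above middle C).
    pitch = 60 + int(24 * doc["readers"] / max_readers)
    # More reader countries -> longer note (in beats).
    duration = max(1, doc["reader_countries"] // 2)
    midi.addNote(track, channel, pitch, time, duration, volume)
    time += duration

with open("library_sound.mid", "wb") as f:
    midi.writeFile(f)
```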
The result? Well, it’s not music, but it is “the sound of your library”. A library where every document has roughly the same readership sounds quite different from one with a wide range of readerships.
As for the music part: several colleagues suggested using a base (a rhythm base, for example) and fitting the notes to it, which should sound better; a rough sketch of that idea follows below.
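One hypothetical way to follow that suggestion would be to quantize the note timings onto a beat grid before adding them to the MIDI track. The `quantize` helper below is my own illustration, not part of the original hack:

```python
# Hypothetical helper: snap note timings onto a beat grid so the output
# follows a rhythm instead of free-running durations.
def quantize(beats, step=0.5):
    """Round a start time or duration to the nearest multiple of `step` beats."""
    return max(step, round(beats / step) * step)

# e.g. the duration derived from country counts could be snapped before
# being passed to addNote:
# duration = quantize(doc["reader_countries"] / 2, step=0.5)
```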
After building it, I did a bit more research and some colleagues pointed me to a few videos. A very good one is:
(found at http://vimeo.com/29893058/)