Twitter is built solely around sending text-based messages to one another. But what if we interpreted that text as sound?
This thought inspired our latest hackathon project. Alongside @maxtillich and @aguming, we decided to use DataSift to measure the sentiment of users and assign sounds based on how happy or sad a user is.
I’ve been itching to use the new <audio> tag in HTML5, and it has finally gained support in most of the major browsers. Using @aguming’s music skills, we created 12 different sounds to reflect each of the moods we can determine. We then listen to the DataSift stream, measure the average sentiment every 500 milliseconds, and play the matching sound.
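The core loop is simple enough to sketch. Here’s a rough, hypothetical version of the mapping: the sound filenames, the sentiment range of [-1, 1], and the `getAverageSentiment` callback are all assumptions for illustration, not the actual implementation.

```javascript
// Map a sentiment score in [-1, 1] onto one of 12 mood buckets.
// (Hypothetical: the real score range and bucket count may differ.)
function moodIndex(sentiment) {
  const clamped = Math.max(-1, Math.min(1, sentiment));
  // Scale [-1, 1] into [0, 11].
  return Math.min(11, Math.floor(((clamped + 1) / 2) * 12));
}

// In the browser: every 500 ms, sample the average sentiment and
// play the matching clip via the HTML5 Audio API.
// getAverageSentiment is a stand-in for reading the DataSift stream.
function startListening(getAverageSentiment) {
  setInterval(() => {
    const index = moodIndex(getAverageSentiment());
    new Audio(`mood-${index}.mp3`).play();
  }, 500);
}
```

Bucketing the continuous score, rather than playing a clip per tweet, is what keeps the output sounding like a mood rather than noise.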
What we get is a slightly surreal experience of how Twitter sounds.
The result of about 24 hours of work is a brief introduction to the sound of Twitter.
This project is available on GitHub.
UPDATE - The Listening Machine explores this idea more fully but is not associated with this project.