This project is an experiment in presenting the chaos of online data and the emotional state of social media, revealing the separate identity we carry in the digital world. The core of the project is generating noise from social media (Twitter) data. I collected tweets containing certain keywords with Python, then estimated the sentiment of each tweet with NLP. The resulting sentiment values drive sound generation in Max/MSP: for example, positive emotions correspond to a cheerful C minor, and negative emotions to a low F major. The final result is a dynamic, data-driven noise.
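The pipeline above can be sketched in Python. This is only a minimal stand-in, not the project's actual code: the real work pulls live tweets from the Twitter API and scores them with an NLP library, whereas here a tiny word-list scorer, two hardcoded sample tweets, and the specific MIDI note numbers are all illustrative assumptions.

```python
# Minimal sketch of the tweet-to-sound mapping: score a tweet's
# sentiment, then choose a root note for the Max/MSP patch.
POSITIVE = {"love", "great", "happy", "wonderful"}
NEGATIVE = {"hate", "sad", "terrible", "awful"}

def sentiment(text: str) -> float:
    """Score a tweet in [-1, 1] by counting lexicon hits
    (a real NLP sentiment model would replace this)."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    raw = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return max(-1.0, min(1.0, raw / max(len(words), 1) * 5))

def to_root_note(score: float) -> int:
    """Map sentiment to a MIDI root note: positive tweets center
    on C4 (60), negative tweets on a low F (41). The exact notes
    are assumptions, not the project's actual mapping."""
    return 60 if score >= 0 else 41

for tweet in ["I love this wonderful morning", "what a terrible, sad day"]:
    print(to_root_note(sentiment(tweet)))  # prints 60, then 41
```

In the installation, values like these would be streamed to Max/MSP over OSC rather than printed.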
The other part of the work is an interactive sound installation: touching different parts of the device triggers different sounds, so audiences can layer sounds of their own on top of the noise.
Making Process:
Python + OSC
MPR121 Sensor + Max/MSP
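For the "Python + OSC" step, one way to bridge Python and Max/MSP is to send OSC packets over UDP. The sketch below hand-encodes a single-float OSC message with only the standard library; in practice a library such as python-osc does this, and the `/sentiment` address and port 7400 are assumptions that would have to match the `udpreceive` object in the Max patch.

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying one big-endian float32:
    padded address string, padded ",f" type-tag string, then the value."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

# Fire a sentiment value at the Max/MSP patch on the same machine.
# (UDP is connectionless, so this succeeds even with no listener yet.)
packet = osc_message("/sentiment", 0.8)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 7400))
sock.close()
```

On the Max side, a `udpreceive 7400` object would pick these packets up and route them by OSC address to the synthesis chain.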