Monday, December 9, 2013

Memory Shells - Final Description


Short Description

Memory Shells is an exploration of how we distort our memories of past events and the value of this distortion in creating a narrative. The audience listens to audio of memorable and nostalgic events drawn from pop culture, news and speeches. The audio is slowly modified by their facial expressions to create a narrative that attempts to approximate collective memory. These expressions model the personal biases that are introduced when sharing a memory.

Tech Description 

The system consists of a central box and a ‘seashell’. A Raspberry Pi connected to a webcam tracks the user’s face and sends frames to a local server (MJPG-streamer). The server classifies the user’s facial expression with openFrameworks and derives a set of probability values used to select the next track. To include the user’s ‘emotional bias’ in the track, its volume is adjusted and noise is added using SoX. The server then uses WebSockets to send a command back to the Raspberry Pi to play the next segment with the added modifications.
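For concreteness, here is a minimal Python sketch of the selection-and-modification step, assuming the expression classifier outputs one probability per candidate track. The track names, noise level, volume scaling, and the WebSocket endpoint (`ws://raspberrypi.local:9000`) are placeholders for illustration, not the actual implementation; it also assumes SoX/soxi and the `websocket-client` package are installed.

```python
import json
import random
import subprocess
import websocket  # pip install websocket-client

TRACKS = ["speech_01.wav", "news_02.wav", "pop_03.wav"]  # placeholder audio segments
PI_WS_URL = "ws://raspberrypi.local:9000"                # placeholder endpoint

def soxi(flag, path):
    """Query a property of an audio file (rate, channels, duration) via soxi."""
    return subprocess.run(["soxi", flag, path], capture_output=True,
                          text=True, check=True).stdout.strip()

def pick_next_track(probabilities):
    """Weighted random choice over the candidate tracks, using the
    probabilities derived from the current facial expression."""
    return random.choices(TRACKS, weights=probabilities, k=1)[0]

def apply_emotional_bias(track, volume, noise_level, out_path="biased.wav"):
    """Scale the track's volume and mix in white noise with SoX."""
    rate, chans, length = soxi("-r", track), soxi("-c", track), soxi("-D", track)
    # Generate a white-noise bed matching the track's rate, channels and length.
    subprocess.run(["sox", "-n", "-r", rate, "-c", chans, "noise.wav",
                    "synth", length, "whitenoise", "vol", str(noise_level)], check=True)
    # Scale the original track's volume.
    subprocess.run(["sox", track, "scaled.wav", "vol", str(volume)], check=True)
    # Mix the scaled track with the noise bed.
    subprocess.run(["sox", "-m", "scaled.wav", "noise.wav", out_path], check=True)
    return out_path

def send_play_command(path):
    """Tell the Raspberry Pi, over a WebSocket, to play the modified segment."""
    ws = websocket.create_connection(PI_WS_URL)
    ws.send(json.dumps({"command": "play", "file": path}))
    ws.close()

if __name__ == "__main__":
    # Example: an expression reading that slightly favours the first track.
    next_track = pick_next_track([0.5, 0.3, 0.2])
    biased = apply_emotional_bias(next_track, volume=0.8, noise_level=0.05)
    send_play_command(biased)
```

In the installed piece the probabilities, volume and noise level would come from the openFrameworks expression tracker rather than the hard-coded values above; the sketch only shows how those three pieces (weighted selection, SoX processing, WebSocket playback command) fit together.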

Acknowledgements

I would like to thank Dale Clifford and Zack Jacobson-Weaver for their feedback and guidance, and Marc Farra, Liang He and Meng Shi for their support and technical help.
