Space for What Is, Place for What Will Become (2024)

Collegium Hungaricum

Photo Credit: Kathrin Scheidt

Motivations for the Project:

Space for What Is, Place for What Will Become is a collective experiencing project. It is built from a yearning for connection and community, telling stories of our shared experience that acknowledge trauma while also focusing on our strength and revolutionary potential. It sees sound as a locus of wisdom and energy, and as a potent tool for unlocking our bodies. It creates shared environments that move beyond dystopian world views and instead hold space for imagining and manifesting the kinds of futures we would like to move towards.

Thesis

‘Cultivating Revolutionary Practices through Sonic Storysharing that Empower Community & Resist Colonialism’

Theoretical Concerns:

This activated performance installation is the practical extension of my master's thesis, in which I combined my professional and academic experience as a philosopher, electroacoustic composer, technician, and facilitator. I explore these elements through a new lens as a sound artist. Sound art allows me to create installations and active, embodied listening experiences for visitors to participate in, where we are all responsible for the sonic outcomes.

This piece is born of a commitment to sonic and artistic expression with political and social dimensions. My sound performance installations are potent sites for healing, collective transformation, and practising new tools that imagine post-colonial futures. The work is indebted to Indigenous wisdom, aiming not to extract but to answer the call from Indigenous leaders for people of colonial ancestry to take on this work, and in doing so reduce the burden and expectation of labour placed on Indigenous peoples. It ultimately hopes to be a locus for exploring and dismantling white supremacy through individual contemplation and collective experiencing.

To dig deeper into the ideas that informed this work, please read my thesis.

Compositional Design:

The piece was an improvisation explored by uninitiated, untrained audience-turned-performers. The score and the boundaries of interaction were therefore carefully designed to balance the need for open expression with the need for the result to feel like a cohesive piece. Throughout each part, simple directions were projected on a screen; otherwise the format was open, allowing participants to experiment and explore. I determined the boundaries of each section by turning down the volume and announcing the end when the time felt right. There were four parts, each focussing on different social and political concepts as well as different technical means of expression.

Image of a participant using the MUGIC sensors in the performance.

Collective resonance

This part focused on the sensing chairs and an improvised vocal performance by me. It was designed to get people actively participating and to connect to the central goal of building a sense of community through co-creation.

White Silence

A passive piece: drones of high and low frequencies with throbbing, undulating textures that created a sense of intensity and unease. People were asked not to move, so that they sat with these ideas and sound sensations.

Swarming intentions

Spoken intentions were recorded live and granulated with a patch I built in gen; MUGIC sensors sonified and spatialised the textured phrases.

Ensemble

In the final part, the additional hit and stomp boxes were activated and every element was turned on; people were free to explore the objects and the environment as a playground.

Technical Information:

Photo Credit: Kathrin Scheidt - Raspberry Pi processing the sensor data and sending it to the main computer via UDP/OSC

This piece continued a variety of my technical research topics. I kept working with sensors and explored some new territory in programming. The piece is built structurally and visually around six sensing chairs. Each chair had a vibrating cushion that would activate tones, and ultrasonic sensors attached to the chairs translated movement data that altered the pitch content of those tones. Some sensors faced inwards and others outwards; the result was a live improvised sound piece based on the movements and interactions of the participants. In addition to the sensing chairs, I included other interactive sounding objects: a stomp box that encouraged people to step and move with their body and feet, and a hit box that invited more nuanced interaction, akin to a drum. I also used MUGIC sensors that allowed for more free-flowing movement around the room. These devices were augmented through sampling and synthesis programs that I developed in Max and gen and ported to Max for Live for the performance. The sensors were read by a Raspberry Pi, which sent the data via UDP/OSC to the Max for Live patches running in Ableton Live.
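To make the data path concrete, here is a minimal sketch of how one sensor reading could be packed into an OSC message and sent over UDP from the Pi. This is an illustration only: the address pattern, host, port, and distance-to-pitch mapping are hypothetical placeholders, and the actual piece used Max/gen patches rather than this hand-rolled encoder.

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, value: float) -> bytes:
    # Minimal OSC 1.0 message carrying a single big-endian float32 argument:
    # padded address pattern, padded type tag string ",f", then the float.
    return osc_pad(address.encode("ascii")) + osc_pad(b",f") + struct.pack(">f", value)

def distance_to_pitch(cm: float, low: float = 110.0, high: float = 880.0) -> float:
    # Illustrative linear mapping of an ultrasonic distance reading
    # (clamped to 0-200 cm) onto a frequency range in Hz.
    t = max(0.0, min(cm, 200.0)) / 200.0
    return low + t * (high - low)

# One reading from a chair sensor, sent to the main computer over UDP.
freq = distance_to_pitch(75.0)
packet = osc_message("/chair/1/pitch", freq)  # hypothetical address pattern
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("192.168.0.10", 9000))  # placeholder host and port
```

In practice a library such as python-osc handles the encoding, but the byte layout above is what travels in each UDP packet, which is why OSC suits low-latency sensor streams like this one.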

Video Documentation

Below is a full-length video documentation of the performance experience, amalgamated from the three performances that occurred.