Concert 1
Thursday, March 13, 2025
Bloomberg Center Theater
1pm
Antithesis (this is the part where I scream) by Maxwell Miller
Maxwell Miller, guitar & voice
the New Pulsar Generator by Marcin Pietruszewski
Marcin Pietruszewski, the New Pulsar Generator
RILF by Rachel Devorah Rome
Rachel Devorah Rome, electronics
Here comes a candle to light you to bed by Marcin Pączkowski
Marcin Pączkowski, motion sensors
Program Notes
Antithesis (this is the part where I scream)
Antithesis (this is the part where I scream) is the embodiment of the frustration I sometimes feel dealing with a split sense of self. It explores feelings of inevitability and anger, as I grapple with the electronic sounds and fight to retain my voice as a performer amidst a thick and sometimes overwhelming texture. The piece is quite open in its structure, asking the performer to create gestures of increasing intensity until the music reaches its climax, a full scream, before releasing that energy back into silence. It forgoes the use of video and is performed in near-darkness, symbolic of the internal experience it portrays.
the New Pulsar Generator
The concert features a selection of compositions conceived with the New Pulsar Generator (nuPG), an interactive program for digital sound synthesis designed by the composer Marcin Pietruszewski and developed in the SuperCollider 3 (SC3) programming language. The nuPG implements pulsar synthesis (PS), a technique that operationalises the notion of rhythm, with its multitemporal affordances, as a system of interconnected patterns evolving on multiple timescales. PS generates a complex hybrid of sounds across the perceptual time span between infrasonic pulsations and audio frequencies, giving rise to a broad family of musical structures: singular impulses, sequences, continuous tones, time-varying phrases, and beating textures. See: https://www.marcinpietruszewski.com/the-new-pulsar-generator
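For the technically curious, a minimal pulsar-synthesis sketch in SC3 might look as follows. This is an illustration of the basic PS idea only, not the nuPG itself, and the parameter ranges and mouse controls are assumptions chosen for demonstration: a one-cycle sine “pulsaret” is emitted at a fundamental rate, the pulsaret’s own frequency sets the formant, and the remainder of each pulsar period is silence.

(
{
    var fund = MouseY.kr(2, 100, 1);        // pulsar repetition rate: infrasonic to audible (assumed range)
    var formant = MouseX.kr(200, 2000, 1);  // pulsaret (formant) frequency in Hz (assumed range)
    var trig = Impulse.ar(fund);            // one trigger per pulsar period
    var phase = Sweep.ar(trig, formant);    // cycles elapsed since the last trigger
    var pulsaret = sin(2pi * phase) * (phase < 1); // one sine cycle, then silence until the next trigger
    pulsaret.dup * 0.2;                     // stereo output at a modest level
}.play;
)

Sweeping the fundamental below about 20 Hz yields discrete rhythmic impulses; raising it into the audio range fuses the pulsars into a continuous pitched tone, tracing the continuum between infrasonic pulsation and audio frequency described above.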
RILF
RILF (the ‘R’ stands for robot…) explores resonances in the (uncanny) valley between feminized machines and machinized femmes. Live coding in SuperCollider and TidalCycles with and alongside bespoke SID chip hardware synthesizers, I resignify signal from “Monica” (the Google AI assistant) answering psychologist Arthur Aron’s “36 questions to fall in love”; “Ann Steel” (composer Roberto Cacciapaglia’s feminized disco avatar); auto-tuned Cher; Donna Summer with an early drum machine; “Samantha” (the Apple text-to-speech voice) iterating on linguistics work by Natural Language Processing (NLP) researcher Bill MacCartney; TikTok streamer “Pinky” (Fedha Sinon) performing as a Non-Playable (videogame) Character (NPC) for the music producer Timbaland; the feminized techno avatar of Uwe Schmidt & Pete Namlook; Zelda; and the actor Scarlett Johansson singing as the character “Samantha” in the 2013 film “her” through Holly Herndon’s “Holly+” AI vocal clone.
Here comes a candle to light you to bed
The title of the piece comes from a nursery rhyme referenced in George Orwell’s book “1984”. Throughout the book, the main character struggles to remember the rhyme’s ending, which is revealed to him at a key moment, right before he is captured: “Here comes a candle to light you to bed, here comes a chopper to chop off your head”.
This thread in the story resonated with me, as it touches on the volatility of one’s memory against the backdrop of the large-scale manipulation of recorded knowledge performed by the book’s totalitarian regime. While Orwell mostly deals with memory that exists within humans and memory that is written down, today we deal with the omnipresence of recorded media of multiple sorts, particularly sounds, images, and videos. As we produce larger and larger amounts of such records, not only through traditional books, audio recordings, and films, but also through social media, blogs, podcasts, etc., I find it fascinating how we navigate this oversaturated space and how it is being transformed both by large-scale phenomena and by targeted actions. In my piece I seek to explore these transformations by employing a machine learning model that embeds the memory of the piece. As it is performed, the piece re-composes itself: the model is re-trained to embed new “memories” of the performance gestures.
This work is supported by the Department of Digital Arts and Experimental Media at the University of Washington, as well as by the eScience Institute with support from the Washington Research Foundation.
The piece is realized in the third-order Ambisonics spatial sound format and employs custom motion sensors developed by the composer.