Park Echoes

Sidewalk Festival (August 2, 2025)
Eliza Howell Park


Call and Response Game
built within TouchDesigner


About


Park Echoes is an immersive audio call-and-response game built for Sidewalk Festival 2025. Participants were invited to collaborate to reveal a generative soundscape recorded in Eliza Howell Park, connecting with one another through sonic programs that require interaction.

The game objective was to mimic natural sounds and prevent the soundscape from degrading into industrial ambience. As players succeeded, the soundscape transitioned through ambience and bird sounds from different locations within Eliza Howell Park. To make the experience immersive, ambisonic recordings were played back on a surround sound system and modified with spatial audio effects when certain game conditions were met.

Alongside the installation were three collaborative spatial audio performances from local artists: Glare, MechaNatura, and Echosystems.

Taking actions that will have net positive outcomes for our communities (human and non-human) over the long term requires us to be good listeners -- listening not only to each other, but also to the rhythms of nature and to the many organisms that are critical to the functioning of habitats, theirs and our own. One of the goals of this game was to help sharpen that attention to nature through the use of ambisonic recordings and a live vocal comparison program.

____________________________________________

Table of Contents:
About
Sampling
Sample Library Organization and Processing
Scene Generation
Sample-Voice Comparison
Scene Change
Spatialization
AQI Data Processing and SFX
Sidewalk Festival Spatial Audio Stage & Event
Future Directions


Network Overview



____________________________________________

Sampling

Equipment:
  • Zoom F8 Field Recorder Kit (courtesy of NAS)
  • Sennheiser AMBEO VR 3D Microphone (courtesy of NAS)
  • Zoom H3-VR Mic (courtesy of NAS)
  • Zoom H2N Mics
  • Mosquito nets (courtesy of Ériu)

Software:
  • iZotope RX
  • Zoom Ambisonics Player

  
The second early morning sampling

With local sound artists Ariel and Ériu, we conducted two early morning recording sessions in July 2025 (the mosquitoes were incredible around this time -- thankfully Ériu is a skilled forestry teacher and came prepared with mosquito protection gear).

We set out to capture bird sounds alongside environmental ambience and any kind of industrial sound interference.



riverside recording from the F8 reduced to binaural.



trailhead recording using the H2N in spatial mode reduced to binaural


within the wood sculpture recording from the F8 reduced to binaural.

The park sits directly beneath a consistent flight path. Analyzing the recordings in iZotope RX, it was surprising to see how much louder the airplane noise was than the rest of the ambience (spectrograms below).


clip of an airplane overhead, captured by the F8.


After a few days of sampling (including one day with environmentalist Kathy) I began the sample organization process.
____________________________________________

Sample Library Organization and Processing

I sifted through the long-form recordings in RX for usable samples and sorted them into categories:
  1. Birds
  2. Environment
  3. Mechanical

I then sorted the birds and environments further into sub-categories (using Merlin Bird ID to assist with identification):
Birds:
  • Black-capped Chickadee
  • Blue Jay
  • Carolina Wren
  • Catbird
  • Cedar Waxwing
  • Cricket
  • Diverse
  • Downy Woodpecker
  • Eastern Kingbird
  • Eastern Wood-Pewee
  • Goldfinch
  • Great Blue Heron
  • Grosbeak
  • House Wren
  • Indigo Bunting
  • Mallard
  • Mosquito
  • Northern Cardinal
  • Red-Eyed Vireo
  • Red-Winged Black Bird
  • Robin
  • Song Sparrow
  • Warbling Vireo
  • Wood Duck
  • Wood Thrush
Environment:
  • Across the Street From Shelter 2
  • Outside Shelter 2 Path
  • Shelter 2
  • Shelter 2 Trailhead
  • Trail Forest
  • Trail near Rainbow bridge
  • Grass Meadow
  • Skatepark
  • TrailForest
  • a bit out from the river
  • next to river
  • outside nature river trail
  • Bike Path
  • Industrial
  • Shelter 2 Water Droplets
  • Shelter 3
  • Wood Sculpture

Using RX, I removed unwanted noise outside the birds' frequency range to make matching more accurate.

I would sometimes apply the RX de-wind or de-noise functions, but found that usually a high-pass filter alone was enough.

In practice this meant deleting all of the signal below ~1.2 kHz, giving the following result after gain-staging.
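For reference, outside of RX an equivalent high-pass step could be sketched like this -- a minimal illustration only, assuming a mono WAV file and a hypothetical filename; the cutoff is the ~1.2 kHz described above:

    # Hypothetical stand-in for the RX high-pass step: remove energy below ~1.2 kHz
    # so only the birds' frequency range remains. Assumes a mono 16-bit WAV on disk.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfiltfilt

    rate, audio = wavfile.read('song_sparrow_raw.wav')     # hypothetical file name
    audio = audio.astype(np.float32) / 32768.0             # normalize 16-bit PCM

    # 4th-order Butterworth high-pass with a ~1.2 kHz cutoff
    sos = butter(4, 1200, btype='highpass', fs=rate, output='sos')
    filtered = sosfiltfilt(sos, audio)

    wavfile.write('song_sparrow_hipass.wav', rate, (filtered * 32767).astype(np.int16))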

song sparrow spectrogram processing

I then cut smaller clips from the longer samples; here are some below with their associated spectrograms.

song sparrow clip1 & spectrogram

song sparrow clip2 & spectrogram

song sparrow clip3 & spectrogram

Overall, this process of analyzing birdsong spectrograms was very rewarding, particularly in seeing how much variation birdsongs contain. I found the song sparrow spectrograms especially interesting in how complex and digital they seem, even drawing what look like birds!

While amassing and processing a healthy number of samples, I was simultaneously working within TouchDesigner to develop a method to randomly grab a bird sample to pair with an environment ambience upon a scene change.
____________________________________________

Scene Generation


I worked to develop a system that would unpack the files from each sample category's folders.



Below , I used a select DAT which would grab a file name from the parent folder DAT (above ) according to row index parent().digits. The audiofilein CHOP would then grab the file name from that select DAT and I would output a gain-controlled audio stream as well as sample information, including the currently selected sample.



I then packed this module as a base and replicated it according to the number of subfolders in the category (shown below), making each replicant (test#) a Base COMP containing one sample with all of its associated info.

I then created two Base COMPs, sampleSelect and infoSelect, which would grab the audio and info from each base and switch between them according to op('null1')['sampleChange'], shown below. A complication arose because each folder contains a different number of samples, so I aimed to normalize the sample selection: a value of 0 would be the first sample of the folder and a value of 1 would be the last, regardless of how many samples exist in the folder.

Thus the sampleChange value was derived from constant2, which was bound to the parent parameter 'Index' (0 - 1), and passed through math1, which re-ranged it from [0, 1] to [0, the number of usable rows within the folder DAT] via the expression (op('info1')['num_rows']-2).
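As a rough sketch of that normalization (operator and parameter names follow the network above; the snippet itself is only illustrative):

    # Sketch of the index normalization used for sample selection.
    # 'Index' is a 0-1 custom parameter on the parent COMP; info1 reports the folder DAT's
    # row count (the header row plus a trailing offset account for the -2).
    index01 = parent().par.Index.eval()                # 0.0 .. 1.0
    last_row = op('info1')['num_rows'].eval() - 2      # highest selectable row index
    sample_row = round(index01 * last_row)             # integer row fed to the select DAT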


Within the sampleSelect and infoSelect Base COMPs are simply many select CHOPs, assigned to the out values of each test# replicant shown above, feeding into their respective switches (shown below).


I could have had the replicants feed directly into two switch CHOPs without re-selecting the respective audio and info feeds, so why create the sampleSelect and infoSelect Base COMPs?


The answer is a workaround for the Replicator COMP -- whenever the Replicator COMP refreshes or recreates its replicants, all of the CHOP out connections from the replicants are broken, so I needed a way to prevent those connections from breaking each time I refreshed the sample folder unpacking. By using a select CHOP for each replicant (or overcompensating with many select CHOPs) I was able to keep the connections stable -- the only caveat being that I need at least as many select CHOPs as there are samples in the base. There may be a more elegant solution to this.
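One way to script that stability (a sketch, not what I shipped): a Replicator callback could re-point a bank of pre-made select CHOPs at the replicants by path whenever they are recreated. The sampleSelect / infoSelect and test# names follow the write-up; the select CHOP and out names inside them are assumptions.

    # Replicator callbacks DAT sketch: point pre-made select CHOPs at each replicant by path
    # instead of wiring them, so refreshing the Replicator never breaks a connection.
    def onReplicate(comp, allOps, newOps, template, master):
        for i, replicant in enumerate(allOps, start=1):
            audio_sel = op('sampleSelect/select{}'.format(i))   # assumed select CHOP names
            info_sel = op('infoSelect/select{}'.format(i))
            if audio_sel:
                audio_sel.par.chop = replicant.path + '/out1'    # audio out of test# (assumed name)
            if info_sel:
                info_sel.par.chop = replicant.path + '/out2'     # info out of test# (assumed name)
        return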

Now that I could unpack one species' folder, I wanted to scale it up to address every species' folder and be able to switch between them easily, so I would ultimately have two indices to work with: one for the species, and another for the sample within the respective folder.

This was straightforward using another Replicator COMP:


The sceneSelector above functions the same way as the sampleSelector, with scene (species / location) and subscene (sample) control. Now it makes sense to get into how the scenes / subscenes are selected, but first I'll introduce how a scene change happens: through the sample-voice comparison program.

____________________________________________

Sample-Voice Comparison

Initially I had high hopes for a comparison module that would accurately compare two samples and assess their similarity. I came across a Teachable Machine port to TD that Torin Blankensmith developed, yet soon realized the limitations of model training through the program (I would need a workaround to import my own samples using audio loopback through Voicemeeter). Thinking about the breadth of samples that I wanted to include, I decided to shelve this approach for a simpler and more scalable one.

teachable machine model training

I thought that if I could compare just the top frequency bins of the sample to those of the incoming microphone signal, that would at least get me to a rough comparison.

Using CHOPs, I divided up the frequency spectrum of the incoming sample files into bins using filters. To make this quicker I used a Replicator COMP and separated the filters by their parent().digits raised to the second power.
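Roughly, each replicated filter can derive its band edges from its own replicant number -- a sketch only, where the 150 Hz base and the exact edge formula are placeholders and just the digits-squared spacing comes from the network:

    # Sketch of digits-squared band edges for a replicated band-pass filter.
    n = parent().digits                      # 0, 1, 2, ... (Filter0, Filter1, ...)
    base_hz = 150                            # hypothetical base frequency
    low_cut = base_hz * (n ** 2)             # lower edge of this bin (0 Hz for Filter0)
    high_cut = base_hz * ((n + 1) ** 2)      # upper edge of this bin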

/HzComp/AutoGain/BPBinning1/Filter0

/HzComp/AutoGain/BPBinning1


/HzComp/AutoGain/BPBinning1 (zoomed in)

The filtered frequencies (bins) were merged and output from the COMP.

This was run through a makeshift auto-gain feedback system that would balance the incoming audio gain against the sample.

/HzComp/AutoGain


The output of this COMP (AutoGain1), containing the binned frequencies, would then be compared with that of the incoming microphone audio (audiodev).

/HzComp


Here are the steps within HzComp, shown above (a rough Python paraphrase follows the list).

  1. The incoming sample file is summed from ambisonic to mono, then passed through the frequency binning and auto-gain functions.
  2. The binned frequencies pass through a Trail CHOP followed by an Analyze CHOP to capture the maximum values over the trail window.
  3. The frequency-bin values from analyze4 are then reordered by value, and only the two highest are kept via a Delete CHOP.
  4. To match up the referenceBins null with the frequencies analyzed from the incoming microphone audio, I used a chopto DAT and selected op('chopto1')[0,0] and op('chopto1')[0,1].
  5. The difference between referenceBins and select2 is taken via math4 (subtraction).
  6. This value is sent through a Logic CHOP in bound mode, where the bounds are defined by the slider: op('slider1').par.value0/-5 to op('slider1').par.value0/5.
  7. The output of logic5 gives 0s or 1s for the two channels, which I multiply together with math1 to output 1 or 0, indicating whether the two most prominent frequency bins of the sample are the same as those of the microphone input.
  8. This is then run through trail6 and analyze2 in RMS Power mode so that the match has to be sustained for at least 4 s. The result is renamed to 'match' and output from the HzComp COMP.
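The paraphrase below collapses the CHOP plumbing into plain Python: the two most prominent reference frequencies are compared with the mic's within the slider-derived tolerance, and the match has to hold over a trailing window. The 240-frame window, the default tolerance, and the function names are placeholders, not the actual network.

    # Rough paraphrase of the HzComp matching logic above, not the actual CHOP network.
    # ref_freqs / mic_freqs: the two most prominent bin frequencies (Hz) for the reference
    # sample and the microphone, already sorted by magnitude (steps 1-4).
    from collections import deque

    def bins_match(ref_freqs, mic_freqs, tolerance_hz):
        # Steps 5-7: each top frequency must sit within the slider-derived bound.
        within = [abs(r - m) <= tolerance_hz for r, m in zip(ref_freqs, mic_freqs)]
        return int(all(within))            # math1's multiply of the two logic channels

    # Step 8: the match has to be sustained over a trailing window (~4 s of frames).
    history = deque(maxlen=240)            # e.g. 240 frames at 60 fps

    def sustained_match(ref_freqs, mic_freqs, tolerance_hz=50.0):
        history.append(bins_match(ref_freqs, mic_freqs, tolerance_hz))
        return len(history) == history.maxlen and min(history) == 1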



Now that the matching condition has been developed, we have our event trigger to instruct the scene changer.

____________________________________________

Scene Change


A simple scene change would be to take that incoming match value and have the SampleLibrary's sceneSelector choose a random scene and subscene.


This is essentially what is happening here: logic1 looks for a value exactly equal to 1, and upon activation in scenetrigger1, chopexec1 dishes out random values while on:

    op('constant1').par.const0value = random.randrange(0, 18)     # scene
    op('constant3').par.const0value = random.randrange(0, 23)     # subscene
    op('constant7').par.const0value = random.randrange(0, 2154)   # AQI data timepoint
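For context, those three assignments can live inside the standard CHOP Execute 'whileOn' callback -- a sketch, with the callback wrapper mine and the assignments and ranges taken from above:

    # chopexec1 callbacks sketch: dish out new random values while the match channel is on.
    import random

    def whileOn(channel, sampleIndex, val, prev):
        op('constant1').par.const0value = random.randrange(0, 18)     # scene
        op('constant3').par.const0value = random.randrange(0, 23)     # subscene
        op('constant7').par.const0value = random.randrange(0, 2154)   # AQI data timepoint
        return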

The random value in constant1 (below) passes through trail1 to generate a scene history (which wasn't used in this version); the value of the current scene is looked up, rounded to an integer with math2, and fed into both a delayed and a non-delayed sceneChange, which control both the environment and bird sample scenes. The non-delayed change was necessary to have a cued-up environment / bird sample ready, which would then be introduced through crossfading and spatial effects (explained in Spatialization).

The subsceneChange null simply receives a delayed random value from chopexec1's Python script.


Industrialization goal: after 60 seconds of no matches, the environment is set to industrialize.
   
Method: after a match is made in analyze2, count1 resets and begins a timer (converted to seconds by math8), and logic4 is only true once the count reaches 60 seconds. That value is sent through trail3, which is analyzed for its maximum value, added to analyze2, and lagged to produce industrialize, which causes a gain increase in math10 below.
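The same timeout idea in plain Python -- a sketch of the logic only (in the network it is all CHOPs as described above); absTime.seconds is TouchDesigner's running clock, everything else here is illustrative:

    # Sketch of the industrialization timeout: ramp toward industrial ambience
    # once 60 seconds pass with no sustained match.
    INDUSTRIALIZE_AFTER = 60.0      # seconds without a match

    last_match_time = absTime.seconds

    def on_match():
        # Call whenever HzComp reports a sustained match (resets the counter).
        global last_match_time
        last_match_time = absTime.seconds

    def industrialize_amount():
        # 0 while matches keep coming, 1 once 60 s pass with none (feeds the gain in math10).
        waited = absTime.seconds - last_match_time
        return 1.0 if waited >= INDUSTRIALIZE_AFTER else 0.0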



____________________________________________

Spatialization


I used the free, open-source IEM spatialization VSTs to create a spatially interesting transition between ambisonic samples by mapping the azimuth of each channel to quadParams and crossfading between the LFO values during the transition sequence.



I made the LFOs of the incoming and outgoing environment samples 180° out of phase, the idea being to give the illusion of the outgoing environment being swallowed up into a single point while the incoming environment expands out from the opposite side of the soundscape.
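Conceptually, the azimuth motion during a transition looks something like the sketch below; in the network the equivalent values drive the IEM encoder azimuth parameters, and the rate and ranges here are placeholders rather than the real settings.

    # Conceptual sketch of the opposing azimuth LFOs during a scene transition.
    # The outgoing environment collapses toward a single point while the incoming
    # environment, 180 degrees out of phase, expands from the opposite side.
    import math

    LFO_RATE = 0.1   # Hz, placeholder

    def transition_azimuths(t, amount):
        # Return (outgoing_azimuth, incoming_azimuth) in degrees; amount runs 0 -> 1.
        phase = 2 * math.pi * LFO_RATE * t
        outgoing = 180.0 * math.sin(phase) * (1.0 - amount)      # shrinks to a point
        incoming = 180.0 * math.sin(phase + math.pi) * amount    # expands from the opposite side
        return outgoing, incoming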

____________________________________________

AQI Data Processing and SFX

Thanks to some friends at the Ecology Center (s/o Kristy and Salam), I received AQI data from Eliza Howell Park for July 2025. Initially I wanted to integrate this more closely with the sample collection dates to add a layer of environmental representation to the soundscape. Yet given time constraints and limited sample dates, I opted to get more variation by selecting a different AQI reading for each scene change.

To do this, the AQI data in the table DAT in the top left of the image below is converted to CHOP data, clamped from 5 to 500 to remove any negative data errors, then brought into a single channel of data through a shuffle, which is then looked up by a normalized value from constant7 (which got its random value from chopexec1).

That AQI value was then converted to a sine wave frequency by multiplying it by 10 and by the matching amount, yielding a rising-frequency wave whose maximum value correlates with the AQI from that day. After a match, a random day is chosen and that day's AQI sets the resulting frequency of the sine tone.
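A sketch of that chain in Python, where aqi_values, timepoint_index, and match_amount stand in for the table DAT, constant7, and the match channel; the clamp and the 10x scaling are the ones described above.

    # Sketch of the AQI-to-sine-frequency chain described above.
    def aqi_sine_frequency(aqi_values, timepoint_index, match_amount):
        aqi = min(max(aqi_values[timepoint_index], 5), 500)   # clamp out negative data errors
        return aqi * 10.0 * match_amount                       # Hz; peaks at 10x the day's AQI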


This AQI-dependent sine wave was coupled with a successful-match and industrialization SFX:

Here I am replicating oscillators with varying frequencies according to their parent().digits. Each oscillator is triggered with a delay also defined by parent().digits.
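A sketch of the per-replicant expressions behind that -- only the parent().digits dependence is from the network; the 220 Hz base and 0.25 s step are placeholders for illustration:

    # Sketch of per-replicant oscillator settings: pitch and trigger delay both scale
    # with the replicant number.
    n = parent().digits                  # 1, 2, 3, ... per oscillator replicant
    osc_frequency = 220.0 * n            # Hz for this oscillator
    trigger_delay = 0.25 * n             # seconds before this oscillator is triggered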


Match Success Sound

Finally, here is the full demo:



Sidewalk Festival Spatial Audio Stage & Event





Park Echoes promo

Timetable:
🌀 Start – 3:30 PM: Park Echoes (call and response audio game)
🌞 4:20 – 5:00 PM: Glare (Gallons x Cherriel)
🌐 5:00 – 6:00 PM: MechaNatura [mechanatura.com]
🌱 6:00 – Close: Echosystems

On the sunny morning of August 2nd, Ethan (Weather Citizen) and I gathered Neighborhood Art School's (NAS) Fieldspeakers system (7.1.4) and set out to deploy it at 42.39399, -83.27245 in Eliza Howell Park.

The site was shared by Cyrah Dardas’ large metallic mobile hanging from a tree and The (Re)Claim Series’ ceramic installation and activation.

We came across some hurdles, mainly that there was a cherry picker in the zone we were trying to set up in!


After it was moved we got the 12 speakers up and running. Billy also came in clutch with the H3-VR recorder. 

A few participants tried out the game with some wireless mics and we got some matches!

We then carried on to the scheduled programming.

Glare (Gallons x Cherriel) started it out with four machines -- a TC Helicon, a Tetra, a Digitakt, and an FX box -- shifting in and out of defined rhythms and melted ambience. MechaNatura then took us into an entrancing analog noise-field with boutique synths and a mobile modular rig, and Echosystems (Ethan and I) closed it out with delicate computer textures combined with improvised percussive elements and soundscapes. All of the sets ran through Ethan's laptop using a MOTU 16A (thanks Indy).


pc Livinformedia

After all of the performances were done we packed up Ethan’s van and made our way back to NAS to drop off the speakers.

It was such a beautiful day and experience to facilitate an immersive sound stage in this park, playing back the samples I gathered in that space in a spatial format. This project was largely influenced by a project led by NAS and TERQ entitled 'Sound Travels', which studied the effect of spatial audio on learning. That work was eventually translated, through NAS and Billy Mark, into a mobile spatial sound rig meant for outdoor spatial audio listening. Thanks Billy! This was also one of many events over the spring / summer that featured the Fieldspeakers.


pc Livinformedia

____________________________________________


Future Directions:



Now that I've developed the bones of this game, I'm hoping to bring it out again -- maybe in 2026 (hmu if you're interested)!

I'm hoping to add more immersive and interactive elements:
  • reward functions for correct matches
  • scene history
  • visual UI for engagement
  • alternative matching functions




____________________________________________

Thank you to:

Eliza Howell Park
Sidewalk Fest
Neighborhood Art School
Ariel and Ériu
Kathy
Ethan
Glare
MechaNatura
Ecology Center (Kristy and Salam)
Special thanks to Augusta and Sophiyah E. for everything!