<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:media="http://search.yahoo.com/mrss/"
	>

<channel>
	<title>Maro Kariya</title>
	<link>https://marokariya.info</link>
	<description>Maro Kariya</description>
	<pubDate>Sat, 20 Dec 2025 22:46:34 +0000</pubDate>
	<generator>https://marokariya.info</generator>
	<language>en</language>
	
		
	<item>
		<title>ParkEchoes</title>
				
		<link>http://marokariya.info/ParkEchoes</link>

		<comments></comments>

		<pubDate>Sat, 20 Dec 2025 22:46:34 +0000</pubDate>

		<dc:creator>Maro Kariya</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">458767</guid>

		<description>Park Echoes
Sidewalk Festival (August 2, 2025)
Eliza Howell Park
Call and Response Game built within TouchDesigner

About
Park Echoes is an immersive audio call and response game built for Sidewalk Festival 2025, where participants were invited to collaborate to reveal a generative soundscape from Eliza Howell Park, evoking connections with other participants through interactive sonic programs.

The game objective was to mimic natural sounds and prevent the soundscape from degrading into industrial ambience. As players succeeded, the soundscape would transition to ambience and bird sounds from different locations within Eliza Howell Park. To make the experience immersive, ambisonic recordings were used and played back on a surround sound system, modified with spatial audio effects when certain game conditions were met. 



Alongside the installation were three collaborative spatial audio performances from local artists: Glare, MechaNatura, and Echosystems.



Taking actions that will have net positive outcomes for our communities (human and non-human) over a long period of time requires us to be good listeners. Listening not only to each other, but also to the rhythms of nature; to the many organisms that are critical to the functioning of habitats, theirs and our own. One of the goals of this game was to help improve one’s capability and attention to nature through the use of ambisonic recordings and a live vocal comparison program.


____________________________________________
Table of Contents:

About
Sampling
Sample Library Organization and Processing
Scene Generation
Sample-Voice Comparison
Scene Change
Spatialization
AQI Data Processing and SFX
Sidewalk Festival Spatial Audio Stage &#38;amp; Event
Future Directions

Network Overview
&#60;img width="2560" height="1410" width_o="2560" height_o="1410" src_o="https://cortex.persona.co/t/original/i/261fb2563caf191235d0c2589c790bf216e377926a7cdfb2732f08282c6f035e/Network_overview.png" data-mid="1427591" border="0" /&#62;

____________________________________________


Sampling


Equipment:
Zoom F8 Field Recorder Kit (courtesy of NAS)
Sennheiser AMBEO VR 3D Microphone (courtesy of NAS)
Zoom H3-VR Mic (courtesy of NAS)
Zoom H2N Mics
Mosquito nets (courtesy of Ériu)

Software:
Izotope Rx
Zoom Ambisonics Player



&#60;img width="4032" height="3024" width_o="4032" height_o="3024" src_o="https://cortex.persona.co/t/original/i/51d5179a5842451d02c273edc6974e9d46f09b8536ec7910fb180049e81f72f9/IMG_9777.jpg" data-mid="1427676" border="0" data-scale="44"/&#62;&#38;nbsp;&#60;img src="https://github.com/otodojo/TouchDesigner-Projects/blob/main/Park%20Echoes%202025/Documentation/Images/IMG_9654_ArielEriuGroupPhoto.jpg?raw=true" width="25%"&#62;&#38;nbsp;&#60;img src="https://github.com/otodojo/TouchDesigner-Projects/blob/main/Park%20Echoes%202025/Documentation/Images/IMG_7906.jpg?raw=true" width="25%"&#62;


The second early morning sampling



With local sound artists Ariel and Ériu we conducted two early morning recording sessions in July of 2025 (the mosquitos were incredible around this time -- thankfully Ériu is a skilled forestry teacher and came prepared with mosquito protection gear).
We set out to capture bird sounds alongside environmental ambience and any kind of industrial sound interference.




&#60;img src="https://github.com/otodojo/TouchDesigner-Projects/blob/main/Park%20Echoes%202025/Documentation/Images/NearCreekpano.jpg?raw=true" width="100%"&#62;






riverside recording from the F8 reduced to binaural.


&#60;img src="https://github.com/otodojo/TouchDesigner-Projects/blob/main/Park%20Echoes%202025/Documentation/Images/IMG_9782_ptrailpanorama.jpg?raw=true" width="100%"&#62;


trailhead recording using the H2N in spatial mode reduced to binaural






&#60;img src="https://github.com/otodojo/TouchDesigner-Projects/blob/main/Park%20Echoes%202025/Documentation/Images/IMG_9786_woodsculpture.jpg?raw=true" width="100%"&#62;










within the wood sculpture, recording from the F8 reduced to binaural.




The park is right underneath a consistent flight path. Analyzing the recordings in Izotope Rx, it was surprising to see how much louder the airplane noise was than the entire ambience (spectrograms below).



clip of an airplane overhead, from the F8.




&#60;img src="https://github.com/otodojo/TouchDesigner-Projects/blob/main/Park%20Echoes%202025/Documentation/Images/airplane-spectrogram.JPG?raw=true" width="100%"&#62;

&#60;img src="https://github.com/otodojo/TouchDesigner-Projects/blob/main/Park%20Echoes%202025/Documentation/Images/airplane-spectrogram-zoomed.JPG?raw=true" width="100%"&#62;



After a few days of sampling (including one day with environmentalist Kathy) I began the sample organization process.
____________________________________________



Sample Library Organization and Processing
I sifted through long-form recordings using Rx for samples and sorted them according to categories:
Birds
Environment
Mechanical
I then sorted the birds and environments further into sub-categories (using Merlin Bird ID to assist with identification):

	

Birds:

Black-capped Chickadee, Blue Jay, Carolina Wren, Catbird, Cedar Waxwing, Cricket, Diverse, Downy Woodpecker, Eastern Kingbird, Eastern Wood-Pewee, Goldfinch, Great Blue Heron, Grosbeak, House Wren, Indigo Bunting, Mallard, Mosquito, Northern Cardinal, Red-Eyed Vireo, Red-Winged Black Bird, Robin, Song Sparrow, Warbling Vireo, Wood Duck, Wood Thrush


	

Environment:
Across the Street From Shelter 2, Outside Shelter 2 Path, Shelter 2, Shelter 2 Trailhead, Trail Forest, Trail near Rainbow Bridge, Grass Meadow, Skatepark, Trail, Forest, A Bit Out from the River, Next to River, Outside Nature River Trail, Bike Path, Industrial, Shelter 2 Water Droplets, Shelter 3, Wood Sculpture

Using Rx, I removed unwanted noise outside the birds’ frequency range to make matching more accurate.


I would sometimes apply the Rx dewind or denoise functions, but found that usually the hipass filter alone was enough.




This meant deleting all of the signal below ~1.2 kHz, giving the following result after gain-staging. 



	&#60;img width="2290" height="1104" width_o="2290" height_o="1104" src_o="https://cortex.persona.co/t/original/i/68cb78dbe2bbe2d4c95e10ab2bc3624fbbca6c9a070451b76a46e01f5c238082/songsparrow_A_before.JPG" data-mid="1427670" border="0" /&#62;&#60;img width="2287" height="1090" width_o="2287" height_o="1090" src_o="https://cortex.persona.co/t/original/i/4c928bcb10fbaefe09a3941eab750e5e0709ad2ee8934017235d1ba36c6e2dc5/songsparrow_A_after.JPG" data-mid="1427671" border="0" /&#62;
	&#60;img width="2241" height="1096" width_o="2241" height_o="1096" src_o="https://cortex.persona.co/t/original/i/6319804a6a3484ccaeea97f613e960f11cfab216688aa8ecfa50311fe0728eea/songsparrow_B_after.JPG" data-mid="1427674" border="0" /&#62;&#60;img width="2259" height="1100" width_o="2259" height_o="1100" src_o="https://cortex.persona.co/t/original/i/e8fc9f0d090a1dda5222734bf8af5ab0734f5797945b6c12528d7cb3741dc265/songsparrow_B_before.JPG" data-mid="1427675" border="0" /&#62;
song sparrow spectrogram processing

I then cut smaller clips from the longer samples; here are some below with their associated spectrograms.


&#60;img src="https://github.com/otodojo/TouchDesigner-Projects/blob/main/Park%20Echoes%202025/Documentation/Images/songsparrow8.JPG?raw=true" width="100%"&#62;
song sparrow clip1 &#38;amp; spectrogram

&#60;img src="https://github.com/otodojo/TouchDesigner-Projects/blob/main/Park%20Echoes%202025/Documentation/Images/songsparrow9.JPG?raw=true" width="100%"&#62;


song sparrow clip2 &#38;amp; spectrogram



&#60;img src="https://github.com/otodojo/TouchDesigner-Projects/blob/main/Park%20Echoes%202025/Documentation/Images/songsparrow10.JPG?raw=true" width="100%"&#62;




song sparrow clip3 &#38;amp; spectrogram




Overall, this process of analyzing birdsong spectrograms was very rewarding, particularly in seeing how much variation birdsongs take. I found the song sparrow spectrograms so interesting in how complex and digital they seem, even drawing what look like birds!
While amassing and processing a healthy amount of samples, I was simultaneously working within TouchDesigner to develop a method to randomly grab a bird sample to pair with an environment ambience upon a scene change.&#38;nbsp;

____________________________________________
Scene Generation
I worked to develop a system that would unpack the files in each sample category’s folders. 
&#60;img width="1326" height="567" width_o="1326" height_o="567" src_o="https://cortex.persona.co/t/original/i/9965f06566213d0d8ac831850a0578630fa9d4eabe6e0b8689a467d5ac82faac/foldermaster-withfolder.JPG" data-mid="1427697" border="0" data-scale="100"/&#62;

Below , I used a select DAT which would grab a file name from the parent folder DAT (above ) according to row index parent().digits. The audiofilein CHOP would then grab the file name from that select DAT and I would output a gain-controlled audio stream as well as sample information, including the currently selected sample. 


&#60;img width="1557" height="917" width_o="1557" height_o="917" src_o="https://cortex.persona.co/t/original/i/5e1d5aa57d52987402bf0b8ccef95bf213c1327ace656432d9ed183d1e461fec/filemaster.JPG" data-mid="1427685" border="0" /&#62;


I then packed this module as a base and replicated it according to how many subfolders this category has (shown below), making each replicant (test#) a Base COMP containing one sample with all of its associated info.

I then created two Base COMPs, sampleSelect and infoSelect, which grab the audio and info from each base and switch between them according to op('null1')['sampleChange'], shown below. Yet a complication arose: each folder contains a different number of samples, so I aimed to normalize the sample selection, where a value of 0 selects the first sample of the folder and a value of 1 selects the last, regardless of how many samples the folder contains. 

Thus the sampleChange value was derived from constant2, which was bound to the parent parameter ‘Index’ (0 - 1) and passed through math1, which reranged [0, 1] to [0, number of samples - 1] using the expression (op('info1')['num_rows']-2).&#38;nbsp;
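The reranging step can be sketched as follows; this is a minimal stand-in for the constant2 / math1 chain, assuming num_rows counts the Folder DAT’s rows including its header:

```python
# Sketch of the normalized sample index: a parent 'Index' parameter in
# [0, 1] is reranged to a concrete row so the same knob works for folders
# of any size. (num_rows - 2) mirrors the Math CHOP expression: minus one
# for the header row, minus one for zero-based indexing.

def sample_change(index, num_rows):
    """Map index in [0, 1] to a sample number: 0 -> first, 1 -> last."""
    return round(index * (num_rows - 2))

assert sample_change(0.0, 25) == 0    # first sample of a 24-sample folder
assert sample_change(1.0, 25) == 23   # last sample
```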




&#60;img width="1759" height="876" width_o="1759" height_o="876" src_o="https://cortex.persona.co/t/original/i/c55c55dea2d7b722969c51e7a1ec9e1f2743f9f522fe071f0b7beb36473a4c48/foldermaster.JPG" data-mid="1427700" border="0" /&#62;
Within the sampleSelect and infoSelect Base COMPs are simply many Select CHOPs, each assigned to the out values of a test# replicant shown above, feeding into their respective switches (shown below). 



&#60;img width="1343" height="889" width_o="1343" height_o="889" src_o="https://cortex.persona.co/t/original/i/bfe7160f4aa9b16c2040a7062ac26deeb7eb60e225af54daf7daaa655906c364/sampleSelect.JPG" data-mid="1427699" border="0" /&#62;
I could have had the replicants feed directly into two Switch CHOPs without re-selecting the respective audio and info feeds, so why create the sampleSelect and infoSelect Base COMPs?
The answer is a workaround to the Replicator COMP -- whenever the Replicator COMP refreshes or recreates its replicants, all of the CHOP out connections from the replicants are broken, so I needed a way to prevent those connections from breaking each time I refreshed the sample folder unpacking. By using a Select CHOP for each replicant (or overcompensating with many Select CHOPs) I am able to keep the connections stable -- the only caveat being that I need at least as many Select CHOPs as there are samples in the base. There may be a more elegant solution to this.
Now that I could unpack one species’ folder, I wanted to scale it up to address every species’ folder and easily switch between them, so I would ultimately have two indices to work with: one for the species and another for the sample within the respective folder.
This is straightforward using another Replicator COMP:
&#60;img width="1012" height="817" width_o="1012" height_o="817" src_o="https://cortex.persona.co/t/original/i/e48ccab9e987126440cbae05ca7664531cee57293b0e7ebb29090f3e467d95b6/sampleLibrary_1.JPG" data-mid="1427727" border="0" /&#62;
The sceneSelector above functions the same way as the sampleSelector, with scene (species / location) and subscene (sample) control. Now it makes sense to get to how the scenes / subscenes are selected, but first I’ll introduce how a scene change happens: through the sample-voice comparison program.
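The two-index lookup described above reduces to a sketch like this; the folder contents are hypothetical, and the subscene normalization mirrors the Index parameter described earlier:

```python
# Sketch of the two-index selection: a scene index picks a species'
# folder and a normalized subscene index picks the sample within it,
# mirroring the nested Replicator structure. File names are illustrative.

library = {
    0: ["chickadee_A.wav", "chickadee_B.wav"],
    1: ["bluejay_A.wav", "bluejay_B.wav", "bluejay_C.wav"],
}

def pick(scene, subscene_norm):
    """scene: folder index; subscene_norm in [0, 1], where 0 is the
    first sample and 1 the last, regardless of folder size."""
    samples = library[scene]
    return samples[round(subscene_norm * (len(samples) - 1))]

print(pick(1, 1.0))  # last sample of the second folder
```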


____________________________________________



Sample-Voice Comparison
Initially I had high hopes for a comparison module that would accurately compare two samples and assess their similarity. I came across a Teachable Machine port to TD that Torin Blankensmith developed, yet soon realized the limitations of model training through the program (I would need a workaround to import my own samples using audio loopback through Voicemeeter). Thinking about the breadth of samples that I wanted to include, I decided to shelve this approach for a simpler and more scalable one.

&#60;img width="2096" height="1252" width_o="2096" height_o="1252" src_o="https://cortex.persona.co/t/original/i/f4a17b49405e1f2cf04082e8d797ae8d51c33b762bb68c537cc33616c5f23aa4/teachablemachine.JPG" data-mid="1427613" border="0" /&#62;teachable machine model training
I thought that if I could compare just the top frequency bins of the sample to those of the incoming microphone, that would at least get me a rough comparison node.
Using CHOPs I divided the frequency spectrum of the incoming sample files into bins using filters. To make this quicker I used a Replicator COMP to separate the filters by their parent.digits() to the second power.
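If the filters are spaced by parent.digits() squared as described, the band centers might look like the sketch below; the base frequency and bin count are assumptions for illustration, not values from the network:

```python
# Sketch of quadratically spaced filter bins: each replicated filter's
# center frequency scales with its parent.digits() squared. base_hz and
# the bin count are illustrative assumptions.

def bin_center(digits, base_hz=100.0):
    """Center frequency (Hz) for replicant number `digits`."""
    return base_hz * digits ** 2

centers = [bin_center(d) for d in range(1, 9)]
# gaps widen with frequency, so resolution is finer at the low end
print(centers)
```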
/HzComp/AutoGain/BPBinning1/Filter0
&#60;img width="1873" height="924" width_o="1873" height_o="924" src_o="https://cortex.persona.co/t/original/i/b30aad66a3ff03a6193bda93dde900200156f461271f128fe1c5944b925e43a7/filter.JPG" data-mid="1427650" border="0" data-scale="75"/&#62;


/HzComp/AutoGain/BPBinning1
&#60;img width="1010" height="1058" width_o="1010" height_o="1058" src_o="https://cortex.persona.co/t/original/i/58c42d324c19ae9ff3432af3128d7cac1181aac33c3c0af54a9e666f50476c94/frequencybinning-deeper.JPG" data-mid="1427612" border="0" /&#62;

/HzComp/AutoGain/BPBinning1 (zoomed in)
&#60;img width="1953" height="316" width_o="1953" height_o="316" src_o="https://cortex.persona.co/t/original/i/31d5fb8be57ddb410e7e8aea850293483107c65b6a9dd1ce9935636a147c8370/merged-filters.JPG" data-mid="1427651" border="0" /&#62;
The filtered frequencies (bins) were merged and output from the COMP.
This was run through a makeshift autogain adjusting feedback system that would balance the incoming audio gain with the sample.&#38;nbsp;
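A feedback autogain like the one described might be sketched as below; the update rule and rate are assumptions, shown only to illustrate the balancing behavior, not the actual CHOP network:

```python
# Sketch of a makeshift autogain feedback loop: nudge the mic gain each
# step so the mic's level approaches the reference sample's level.

def autogain_step(gain, mic_rms, sample_rms, rate=0.1):
    """Return an updated gain that moves the mic level toward the
    sample level; `rate` controls how fast the loop converges."""
    if mic_rms <= 0:
        return gain
    error = sample_rms - mic_rms * gain
    return max(0.0, gain + rate * error)

gain = 1.0
for _ in range(200):  # iterate the feedback loop until it settles
    gain = autogain_step(gain, mic_rms=0.5, sample_rms=1.0)
# the mic, scaled by the converged gain, now matches the sample level
```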


/HzComp/AutoGain
&#60;img width="1475" height="816" width_o="1475" height_o="816" src_o="https://cortex.persona.co/t/original/i/ddf60221061623aea84a5dc4a320804d25d6d5573cbf751ae81f9aa6607b42c4/autogain.JPG" data-mid="1427611" border="0" /&#62;


This COMP (AutoGain1) containing the binned frequencies would then be compared with that of the incoming microphone audio (audiodev).

/HzComp
&#60;img width="1989" height="477" width_o="1989" height_o="477" src_o="https://cortex.persona.co/t/original/i/7822b72ab2c12297c718966cabb91c30f204783819be4dc60b84fa7a7f1834b3/frequency-binning-hr.jpg" data-mid="1427610" border="0" /&#62;



Here are the steps of the HzComp above:
The incoming sample file is summed from ambisonic to mono, then passed through the frequency binning and autogain functions.
The binned frequencies pass through a Trail CHOP followed by an Analyze CHOP to capture the maximum values of the trail window.
The frequency bin values from analyze4 are then reordered according to value, and only the two highest are kept through a Delete CHOP.
To match up the referenceBins null with the frequencies analyzed from the incoming microphone audio (audiodev), I used a Chop To DAT and selected op('chopto1')[0,0] and op('chopto1')[0,1].
The difference of referenceBins and select2 is taken via math4 (subtraction).
This value is sent through a Logic CHOP in bound mode, where the bounds are defined by the slider: op('slider1').par.value0/-5 to op('slider1').par.value0/5.
The output of logic5 gives 0’s or 1’s for the two channels, which I multiplied together with math1 to give a single 1 or 0, indicating that the two most prominent frequency bins of the sample match those of the microphone input.
This is then run through trail6 and analyze2 in RMS Power mode so that the match must be sustained for at least 4 s. The result is renamed ‘match’ and output from the HzComp COMP.
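The comparison steps above can be condensed into a sketch like this, outside TouchDesigner. It treats the Delete CHOP step as keeping the indices of the two strongest bins, and the Logic CHOP bounds as a symmetric tolerance; both are simplifying assumptions:

```python
# Sketch of the matching condition: compare the two most prominent
# frequency bins of the reference sample against those of the mic input,
# within a slider-defined tolerance. Bin values are illustrative.

def top_two_bins(bins):
    """Indices of the two highest-valued bins (the reorder + Delete
    CHOP step)."""
    return sorted(range(len(bins)), key=lambda i: bins[i], reverse=True)[:2]

def is_match(sample_bins, mic_bins, tolerance=1):
    """1 if both prominent bins agree within tolerance, else 0 -- the
    product of the two Logic CHOP channels."""
    ref = top_two_bins(sample_bins)
    live = top_two_bins(mic_bins)
    return int(all(abs(r - l) <= tolerance for r, l in zip(ref, live)))

print(is_match([0, 5, 9, 1], [0, 6, 8, 1]))  # same two bins lead -> 1
```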









&#60;img width="674" height="278" width_o="674" height_o="278" src_o="https://cortex.persona.co/t/original/i/0234012beeb9e0109933ef5585f17ed0625b4b081118e8e43460943897a3219a/ParkEchoes_Doc.3.jpg" data-mid="1427608" border="0" /&#62;
Now that the matching condition has been developed, we have our event trigger to instruct the scene changer.&#38;nbsp;


____________________________________________
 Scene Changing
A simple scene change is to take that incoming match value and have the SampleLibrary’s sceneSelector choose a random subscene and scene.&#38;nbsp;
&#60;img width="1163" height="1029" width_o="1163" height_o="1029" src_o="https://cortex.persona.co/t/original/i/e127323e8b801dba4487feeba1531813a20163ac9f120f52b453599ddeb9a5df/scenetrigger.JPG" data-mid="1427743" border="0" /&#62;
This is essentially what is happening here: logic1 looks for a value exactly equal to 1, and upon activation in scenetrigger1, chopexec1 dishes out random values while on:
op('constant1').par.const0value = random.randrange(0,18) [for the scene]
op('constant3').par.const0value = random.randrange(0,23) [for the subscene]
op('constant7').par.const0value = random.randrange(0, 2154) [for the AQI data timepoint]
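In a CHOP Execute DAT, the callback above might look roughly like this; the dictionary stands in for the three Constant CHOP parameters, and everything except the quoted ranges is illustrative:

```python
# Sketch of the chopexec1 behavior: on a match, pick a random scene,
# subscene, and AQI timepoint. The ranges mirror the randrange calls
# quoted in the text; the dict stands in for the Constant CHOP params.

import random

N_SCENES, N_SUBSCENES, N_AQI_POINTS = 18, 23, 2154

def on_match(constants):
    """Randomize the three values driving the next scene."""
    constants['scene'] = random.randrange(0, N_SCENES)
    constants['subscene'] = random.randrange(0, N_SUBSCENES)
    constants['aqi_index'] = random.randrange(0, N_AQI_POINTS)
    return constants

state = on_match({})
print(state)
```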
The random value in constant1 (below) passes through trail1 to generate a scene history (which wasn’t used in this version); the value of the current scene is looked up, rounded to an integer with math2, and fed into both a delayed and a non-delayed sceneChange, which control both the environment and bird sample scenes. The non-delayed change was necessary to have a cued-up environment / bird sample ready, which would be introduced through crossfading and spatial effects (explained in Spatialization).
The subsceneChange null simply receives a delayed random value from chopexec1’s python script.

&#60;img width="2265" height="1057" width_o="2265" height_o="1057" src_o="https://cortex.persona.co/t/original/i/e834d3033b0e4480b3afad0b78bab735c2f2447346bc589917ca0af3c71dd6d1/scenechanglogic.JPG" data-mid="1427744" border="0" /&#62;
Industrialization goal: after 60 seconds of no matches, the environment is set to industrialize.
Method: after a match is made in analyze2, count1 resets and begins a timer (set to seconds by math8), and logic4 is only true once the count reaches 60 seconds. That value is sent through trail3, analyzed for its maximum value, added with analyze2, and lagged to produce industrialize, which causes a gain increase in math10 below.&#38;nbsp;

&#60;img width="665" height="233" width_o="665" height_o="233" src_o="https://cortex.persona.co/t/original/i/2f6fe971b94e6fb95658944114cf417eda85fb5b1d4651efc280f7a5f722bbf9/industrialswitch.JPG" data-mid="1427789" border="0" /&#62;
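The industrialization behavior reduces to a sketch like this; the ramp time of the lag step is an assumed value:

```python
# Sketch of the industrialization timer: a match resets the counter, and
# once 60 seconds pass without a match the 'industrialize' value drives a
# gain increase. ramp_s stands in for the Lag CHOP's assumed ramp time.

INDUSTRIALIZE_AFTER_S = 60

def industrialize_gain(seconds_since_match, ramp_s=10):
    """0.0 while matches are recent; past the 60 s threshold, ramp
    linearly toward 1.0 over ramp_s seconds (the lag step)."""
    if seconds_since_match < INDUSTRIALIZE_AFTER_S:
        return 0.0
    return min(1.0, (seconds_since_match - INDUSTRIALIZE_AFTER_S) / ramp_s)

print(industrialize_gain(65))  # 5 s past the threshold, mid-ramp
```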


____________________________________________





Spatialization
I used the free open-source IEM spatialization VSTs to create a spatially interesting transition between ambisonic samples by mapping the azimuth of each channel to quadParams and crossfading between the LFO values during the transition sequence.&#38;nbsp;

    
I made the LFOs in the incoming and outgoing environment samples 180° out of phase, with the idea of giving the illusion of the outgoing environment being swallowed up into a single point while the incoming environment expands out from the opposite side of the soundscape.
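The opposed-LFO idea can be sketched numerically; the LFO rate is an assumed value, and the outputs are unitless azimuth offsets rather than the actual quadParams mapping:

```python
# Sketch of the 180-degree-opposed LFOs driving the transition: the
# outgoing environment's azimuth modulation and the incoming one's are
# always mirror images, so one collapses as the other expands.

import math

def azimuths(t, rate_hz=0.1):
    """Return (outgoing, incoming) azimuth offsets at time t seconds."""
    phase = 2 * math.pi * rate_hz * t
    outgoing = math.sin(phase)
    incoming = math.sin(phase + math.pi)  # 180 degrees out of phase
    return outgoing, incoming

out_az, in_az = azimuths(1.0)
print(out_az, in_az)  # equal magnitude, opposite sign
```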

____________________________________________





AQI Data Processing and SFX
Thanks to some friends at Ecology Center (s/o Kristy and Salam) I received AQI data from Eliza Howell Park in July 2025. Initially I wanted to integrate this more closely with the sample collection dates to create an added layer of environmental representation in the soundscape. Yet given time constraints and limited sample dates, I opted to get more variation by selecting a different AQI for each scene change.
To do this, the AQI data in the table DAT in the top left of the image below is converted to CHOP data, clamped from 5 to 500 to remove any negative data errors, then brought into a single channel of data through a shuffle, which is looked up by a normalized value from constant7 (which got its random value from chopexec1).&#38;nbsp;
That AQI value is then converted to a sine wave frequency by multiplying it by 10 and by the matching amount, giving a wave of increasing frequency whose maximum correlates with that day’s AQI. After a match, a random day is chosen and its AQI sets the resulting frequency of the sine tone.&#38;nbsp;
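The AQI-to-frequency mapping reduces to a small sketch; the clamp range and the factor of 10 come from the description above, while the match amount being normalized to [0, 1] is an assumption about its scaling:

```python
# Sketch of the AQI-to-sine mapping: the selected day's AQI is clamped,
# then scaled by 10 and by the current matching amount, so the tone's
# frequency rises toward a maximum set by that day's AQI.

def aqi_to_freq(aqi, match_amount):
    """Sine frequency in Hz for a given AQI reading and a match amount
    assumed to lie in [0, 1]."""
    clamped = min(max(aqi, 5), 500)  # remove negative data errors
    return clamped * 10 * match_amount

print(aqi_to_freq(42, 1.0))  # a fully sustained match at AQI 42
```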
&#60;img width="1322" height="711" width_o="1322" height_o="711" src_o="https://cortex.persona.co/t/original/i/17fb1ea5187b22783f9de9e0ff3c65579d9ab9bc845403df79309cbe8ab7b6b8/AQI.JPG" data-mid="1427793" border="0" /&#62;This AQI-dependent sine wave was coupled with a successful match and industrialization sfx:&#60;img width="1280" height="720" width_o="1280" height_o="720" src_o="https://cortex.persona.co/t/original/i/74243d4260102af7d6a55dcd652db5ec98786061871e2aeb8ef1f757700ca319/ParkEchoes_Doc.2.jpg" data-mid="1427592" border="0" /&#62;

Here I am replicating oscillators with varying frequencies according to their parent.digits(). Each oscillator is triggered with a delay also defined by parent.digits().&#38;nbsp;



Match Success Sound
Finally, here is the full demo:



 Event








Park Echoes promo



Timetable:
🌀 Start – 3:30 PM: Park Echoes (call and response audio game)
🌞 4:20 – 5:00 PM: Glare (Gallons x Cherriel)
🌐 5:00 – 6:00 PM: MechaNatura [mechanatura.com]
🌱 6:00 – Close: Echosystems 




On the sunny morning of August 2nd, Ethan (Weather Citizen) and I gathered Neighborhood Art School’s (NAS) Fieldspeakers system (7.1.4) and set out to deploy it at 42.39399, -83.27245 in Eliza Howell Park.&#38;nbsp;
The site was shared by Cyrah Dardas’ large metallic mobile hanging from a tree and The (Re)Claim Series’ ceramic installation and activation.
We came across some hurdles, mainly that there was a cherry picker in the zone we were trying to set up in!
&#60;img width="3024" height="4032" width_o="3024" height_o="4032" src_o="https://cortex.persona.co/t/original/i/62a5f5ff8b5d27c22e26161d17206f3ed3e339a7326e59890cc0923b36a128f9/IMG_0003.jpg" data-mid="1427840" border="0" data-scale="43"/&#62;
After it was moved we got the 12 speakers up and running. Billy also came in clutch with the H3-VR recorder.&#38;nbsp;
A few participants tried out the game with some wireless mics and we got some matches! We then carried on to the scheduled programming.
Glare (Gallons x Cherriel) started it out with four machines -- a TC Helicon, Tetra, Digitakt, and an fx box which shifted in and out of defined rhythms and melted ambience. MechaNatura then took us into an entrancing analog noise-field with boutique synths and a mobile modular rig, and Echosystems (Ethan and I) closed it out with some delicate computer textures combined with improvised percussive elements and soundscapes. All of the works were operating through Ethan’s laptop using a Motu 16A (thanks Indy).&#38;nbsp;


&#60;img src="https://github.com/otodojo/TouchDesigner-Projects/blob/main/Park%20Echoes%202025/Documentation/Images/DSC03113_websize.jpg?raw=true" width="100%"&#62;
pc Livinformedia
After all of the performances were done we packed up Ethan’s van and made our way back to NAS to drop off the speakers. It was such a beautiful day and experience to facilitate an immersive sound stage in this park, playing back the samples I gathered in that space in a spatial format. This project was largely influenced by a project led by NAS and TERQ entitled ‘Sound Travels’, which studied the effect of spatial audio on learning. That work was eventually translated through NAS and Billy Mark into a mobile spatial sound rig meant for outdoor spatial audio listening. Thanks Billy! This was also one of many events over the spring / summer which featured the Fieldspeakers.


&#60;img src="https://github.com/otodojo/TouchDesigner-Projects/blob/main/Park%20Echoes%202025/Documentation/Images/DSC03094_websize.jpg?raw=true" width="25%"&#62;
pc Livinformedia




____________________________________________


Future Directions:
Now that I’ve developed the bones of this game I’m hoping to bring it out again, maybe in 2026 (hmu if you’re interested)!
I’m hoping to add more immersivity and interactive elements:&#38;nbsp;
reward functions for correct matches
scene history
visual UI for engagement
alternative matching functions
&#60;img width="3024" height="4032" width_o="3024" height_o="4032" src_o="https://cortex.persona.co/t/original/i/b28041b09819366323473c9fa83d8783834a9af51f8611875a02a3d2e2cb0870/IMG_9675.jpg" data-mid="1427842" border="0" data-scale="25"/&#62;

____________________________________________



Thank you to:
Eliza Howell Park
Sidewalk Fest
Neighborhood Art School
Ariel and&#38;nbsp;Ériu
 Kathy
Ethan
Glare
MechaNatura
Ecology Center (Kristy and Salam)
Special thanks to Augusta and Sophiyah E. for everything!



					    		</description>
		
		<excerpt>Park EchoesSidewalk Festival (August 2, 2025) Eliza Howell ParkCall and Response Game  built within TouchDesigner  AboutPark Echoes is an   immersive audio call and...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>METAMORPH</title>
				
		<link>http://marokariya.info/METAMORPH</link>

		<comments></comments>

		<pubDate>Tue, 04 Feb 2025 05:46:26 +0000</pubDate>

		<dc:creator>Maro Kariya</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">455776</guid>

		<description>
Sound Design &#38;amp; Scoring
more info:www.wearekrater.org/sarahwondrack
</description>
		
		<excerpt>Sound Design &#38;amp; Scoring more info:www.wearekrater.org/sarahwondrack</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>MediapipeWorkshop</title>
				
		<link>http://marokariya.info/MediapipeWorkshop</link>

		<comments></comments>

		<pubDate>Tue, 04 Feb 2025 05:02:17 +0000</pubDate>

		<dc:creator>Maro Kariya</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">455775</guid>

		<description>Cranbrook 4D Department TouchDesigner + MediaPipe Workshop
by Maro Kariya
January 29th, 2024
Workshop Instructions        View this post on Instagram            A post shared by Carla Diana (@carladiana)


Application at Dawat</description>
		
		<excerpt>Cranbrook 4D Department TouchDesigner + MediaPipe Workshop by Maro Kariya January 29th, 2024 Workshop Instructions        View this post on Instagram            A...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>Artist Profiles</title>
				
		<link>http://marokariya.info/Artist-Profiles</link>

		<comments></comments>

		<pubDate>Wed, 01 Nov 2023 21:00:04 +0000</pubDate>

		<dc:creator>Maro Kariya</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">449589</guid>

		<description>Artist Profiles
	        View this post on Instagram            A post shared by Microtones (@microtones.wav)



</description>
		
		<excerpt>Artist Profiles 	        View this post on Instagram            A post shared by Microtones (@microtones.wav)   	        View this post on Instagram            A...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>Pieces of Sky</title>
				
		<link>http://marokariya.info/Pieces-of-Sky</link>

		<comments></comments>

		<pubDate>Mon, 11 Sep 2023 17:21:56 +0000</pubDate>

		<dc:creator>Maro Kariya</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">447768</guid>

		<description></description>
		
		<excerpt></excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>Organizing</title>
				
		<link>http://marokariya.info/Organizing</link>

		<comments></comments>

		<pubDate>Thu, 31 Aug 2023 14:19:54 +0000</pubDate>

		<dc:creator>Maro Kariya</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">447343</guid>

		<description></description>
		
		<excerpt></excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>Music Productions</title>
				
		<link>http://marokariya.info/Music-Productions</link>

		<comments></comments>

		<pubDate>Thu, 31 Aug 2023 14:19:09 +0000</pubDate>

		<dc:creator>Maro Kariya</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">447342</guid>

		<description>


back to home

	

Penglai (2022)Oscar nominated short filmWritten and directed by Momo Wang




Audio special fx and production support for Chad CannonProduced by Chris Meledandri and Gail HarrisonNarrated by Scarlett Johansson


	

Coral is Calling (2023)

Perry Institute for Marine ScienceReef Rescue Network&#60;img width="3840" height="2160" width_o="3840" height_o="2160" src_o="https://cortex.persona.co/t/original/i/c14b1b5db582794b76f3cc3d4c8590531e329fceb8a7fc86f5fc5759201ff894/vlcsnap-2023-09-11-14h01m41s770.png" data-mid="1315028" border="0" data-scale="100"/&#62;Music and SFX for ‘Coral is Calling’ and Experience Videos - a PSA for restoration of coral reefs.
Directed by Eve Frohm
Filmed by Harry Lee
	Psych. Spotlight (2021-2022)
intro / outro music for podcastpodcast editing
sample

</description>
		
		<excerpt>back to home  	  Penglai (2022)Oscar nominated short filmWritten and directed by Momo Wang     Audio special fx and production support for Chad CannonProduced by...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>Tutorials</title>
				
		<link>http://marokariya.info/Tutorials</link>

		<comments></comments>

		<pubDate>Thu, 31 Aug 2023 13:28:53 +0000</pubDate>

		<dc:creator>Maro Kariya</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">447338</guid>

		<description>Tutorials
TouchDesigner

	




	




	







	





	





	







Ableton
	

	
	


</description>
		
		<excerpt>Tutorials TouchDesigner  	     	     	        	      	      	        Ableton</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>Event Promos</title>
				
		<link>http://marokariya.info/Event-Promos</link>

		<comments></comments>

		<pubDate>Wed, 10 May 2023 04:05:03 +0000</pubDate>

		<dc:creator>Maro Kariya</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">442553</guid>

		<description>Selected Promos





	



	        View this post on Instagram            A post shared by 🐸OTODOJO🐸 (@otodojo)



	


	


	








	


	



	


	


	



	


	


	



	
	
	

	
	
	

	
	
	


</description>
		
		<excerpt>Selected Promos      	    	        View this post on Instagram            A post shared by 🐸OTODOJO🐸 (@otodojo)    	   	   	      	        View this post on...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>waveseed</title>
				
		<link>http://marokariya.info/waveseed</link>

		<comments></comments>

		<pubDate>Wed, 10 May 2023 03:46:11 +0000</pubDate>

		<dc:creator>Maro Kariya</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">442552</guid>

		<description>
	

Chung -가야금 + FM 
[V]
pReconnect - Voices in the Rain 
[A/V]
impakt - bird sounds 
[V]

にこやかな蛙&#38;nbsp; - 呼吸法&#38;nbsp; 
[A/V]




	

sergio cote barco - noise(s) in prime rhythms (hearts n' rains) [V]


Tenkai Kariya - Desert 5 
[V]
Detroit Bureau of Sound - Mayasura 
[V]
otodojo - uproot and replant - an ode to forests 
[A]


return home

</description>
		
		<excerpt>Chung -가야금 + FM  [V] pReconnect - Voices in the Rain  [A/V] impakt - bird sounds  [V]  にこやかな蛙&#38;nbsp; - 呼吸法&#38;nbsp;  [A/V]     	...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
	</channel>
</rss>