"You need to solve all the problems at the same time if the problems are all interconnected." - monz / m-onzabout blog visuals art music
4th Feb 2025
This is a multi-genre algorithmic audiovisual performance built with Pure Data & GEM, plus a custom live-coding and pattern engine that uses the pdjs object and pdsend to receive UDP messages from an external Node.js REPL and scripts. The visuals are created using a hallucinated generative AI process, and the sounds are one-shot samples created with an audio LLM via replicate.com and then sequenced algorithmically. GEM is used to play pre-existing videos, control 3D objects and apply pixel effects.
There are no long static generative audio artifacts played "as is"; everything heard is algorithmically controlled by the patterns. This hybrid algorithmic and generative AI approach is fully open source on the m-onz GitHub (the mixtape & pdav projects). More info can be found at https://m-onz.net, and more documentation and approachable open-source examples will follow for those interested in emulating some or all of these ideas.
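The idea of one-shot sounds "algorithmically controlled by the patterns" can be sketched with a minimal step sequencer: a pattern string is stepped through, and each hit would trigger a one-shot sample. This is an assumption-laden toy, not the actual pattern engine from the mixtape/pdav projects.

```javascript
// Toy step sequencer (illustrative only): 'x' marks a hit, '.' a rest.
// In the real setup, a hit would send a trigger message into Pd
// to fire one of the AI-generated one-shot samples.
function* stepper(pattern) {
  let i = 0;
  while (true) {
    yield { step: i, hit: pattern[i % pattern.length] === 'x' };
    i++;
  }
}

// Run eight steps of a hypothetical kick pattern.
const seq = stepper('x..x..x.');
for (let n = 0; n < 8; n++) {
  const { step, hit } = seq.next().value;
  if (hit) console.log(`step ${step}: trigger one-shot sample`);
}
```

Because the samples themselves are short one-shots, the rhythm and structure live entirely in pattern logic like this, which is what keeps the output algorithmic rather than a static generative recording.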
m-onz is available to perform audiovisuals or just visuals at your event, or to co-organise events under the fake[dac~] brand or with other event concepts.
Performance: Stephen Monslow (m-onz).
Video recording: Rob Hall.
Thanks to Luis Sanz for his comments, feedback and support.