Dynamic Confluence

collab.s
Jieliang Luo, Mert Toka
lab.s
Experimental Visualization Lab
media
Images, sound

A variable-dimension, dynamic visualization system with real-time stereo spatialized audio generation that functions on its own or can be activated by motion-capture data from viewers observing the projection. The animation consists of a collection of images (in this video, 36 black-and-white photographs taken in the Yucatan in 1980) that are positioned within a 3D virtual space using Voronoi tessellation to achieve an aesthetic configuration. Once situated in this dimensional space, each image begins to move based on a set of varying behavior parameters. The overall structure remains stable even as variations in gravitational pull allow individual image panels to move beyond their constraints.
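
The positioning scheme might be sketched as follows. This is a minimal illustration, not the Studio's actual code: the function name place_panels, the seeding of one point per image, and the use of the nearest Voronoi neighbor to bound each panel's drift are all assumptions for clarity.

```python
import numpy as np
from scipy.spatial import Voronoi

def place_panels(n_images: int, seed: int = 1980):
    """Scatter one seed point per image in a 3D volume and derive,
    from the Voronoi tessellation of those seeds, how far each panel
    may drift before leaving its own cell (its 'constraint')."""
    rng = np.random.default_rng(seed)
    seeds = rng.uniform(-1.0, 1.0, size=(n_images, 3))  # 3D seed points
    vor = Voronoi(seeds)
    # Half the distance to the nearest Voronoi neighbor bounds how far
    # a panel can move while staying inside its cell.
    radius = np.full(n_images, np.inf)
    for i, j in vor.ridge_points:
        d = 0.5 * np.linalg.norm(seeds[i] - seeds[j])
        radius[i] = min(radius[i], d)
        radius[j] = min(radius[j], d)
    return seeds, radius

positions, drift_limits = place_panels(36)  # 36 photographs, as in the video
```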

Individual sounds are triggered when images shift their positions, and the configuration of the 63 sound samples dynamically combines to generate an evolving texture of sonic elements of varying density, inspired by the composer Iannis Xenakis' compositions "Orient-Occident" (1960, https://www.youtube.com/watch?v=-IIprq9p498) and "Bohor" (1962, https://www.youtube.com/watch?v=DODVNHukY0I).
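
A hedged sketch of this position-triggered audio logic is given below. The displacement threshold, the panel-to-sample mapping, and the trigger_sample callback are hypothetical; the text confirms only that position shifts trigger samples from a pool of 63 and that the output is stereo-spatialized.

```python
import numpy as np

N_SAMPLES = 63          # pool of sound samples cited above
TRIGGER_DIST = 0.05     # hypothetical displacement threshold

def update_audio(prev_pos, curr_pos, trigger_sample):
    """Fire a sample whenever a panel moves far enough; stereo pan
    follows the panel's horizontal position in the virtual space."""
    moved = np.linalg.norm(curr_pos - prev_pos, axis=1)
    for i in np.flatnonzero(moved > TRIGGER_DIST):
        sample_id = i % N_SAMPLES            # map panel index to a sample slot
        pan = 0.5 * (curr_pos[i, 0] + 1.0)   # x in [-1, 1] -> pan in [0, 1]
        gain = float(moved[i])               # faster motion, denser texture
        trigger_sample(sample_id, pan=pan, gain=gain)
```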

--

The custom software was developed in two phases. The initial studies for positioning a set of images in virtual three-dimensional environments using the Voronoi algorithm evolved out of the 2013 National Science Foundation-sponsored "Swarm Vision" research project (https://vimeo.com/85000265). A series titled "Anamorph-Voronoi" was developed in collaboration with researcher Jieliang (Rodger) Luo, beginning around 2016. The software has been used by the Studio to produce works-on-paper and lenticular panels.

The behavior modeling and interactions of image planes in a dynamic setting, based on inorganic particle modeling and organic group behavior, were developed in the spring of 2020 in collaboration with Media Arts & Technology Ph.D. student Mert Toka. A number of unique features have been introduced, such as dynamic audio presence that varies with the location of the images within the virtual 3D space, and the activation of sounds according to the positions of the orbiting images as they define their space while maintaining group coherence.
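
One plausible reading of the dynamics, combining a particle-like pull toward each panel's home position (preserving the overall structure) with a flocking-style cohesion term (the organic group behavior), is sketched below. The force model and all parameter values are invented for illustration and are not the actual implementation.

```python
import numpy as np

def step(pos, vel, home, dt=1/60, k_home=2.0, k_cohesion=0.5, damping=0.9):
    """One integration step of the panel dynamics.

    pos, vel, home: (n, 3) arrays of panel positions, velocities,
    and Voronoi-derived home positions."""
    to_home = home - pos                  # inorganic, spring-like pull
    to_center = pos.mean(axis=0) - pos    # organic pull toward the group
    acc = k_home * to_home + k_cohesion * to_center
    vel = damping * (vel + dt * acc)      # damping keeps the motion bounded
    return pos + dt * vel, vel
```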