Emerging Sound

Been reading more about agent modeling to get a clearer idea for a project. I found an interesting paper titled "Sync or Swarm: Musical Improvisation and the Complex Dynamics of Group Creativity" by David Borgo.

Here’s a very good quote summarizing the idea behind this:

“Can exploring and thinking about SI (swarm intelligence) affect the way we make and think about music? It remains difficult for many people to envision complex systems organizing without a leader since we are often predisposed to think in terms of central control and hierarchical command. The notion that music can be organized in complex ways without a composer or conductor still leaves many scratching their heads in doubt.”

The core of the idea is, again, emergent behavior: a collection of agents, each following fairly simple rules, that nonetheless produces an overall structure despite there being no clear leader of the swarm. A musical example of this, cited in that paper, is John Stevens's "Click Piece," in which performers are given a single instruction: "play the shortest note possible." Stevens found, running this experiment with many groups, that this one rule was enough for the players to create music together.

They cite four basic ingredients needed to create swarm intelligence: 1) forms of positive feedback, 2) forms of negative feedback, 3) a degree of randomness or error, and 4) multiple interactions among multiple entities. In the ant colony model I attempted earlier, each ant got positive feedback when it found the pheromones it needed to follow, negative feedback when it followed the wrong ones, randomness in its movement, and many interactions with other ants through the pheromone trails.
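To make those four ingredients concrete, here's a minimal sketch of an ant-style model in Python. This is my own toy illustration, not code from the paper or from my earlier ant colony attempt; the constants and the one-dimensional world are invented for brevity.

```python
import random

WIDTH = 20
DEPOSIT = 1.0       # positive feedback: walking a cell strengthens its trail
EVAPORATION = 0.05  # negative feedback: trails fade unless reinforced
NOISE = 0.1         # randomness/error: chance of ignoring the trail entirely

pheromone = [0.0] * WIDTH
ants = [random.randrange(WIDTH) for _ in range(10)]

def step():
    global ants
    moved = []
    for pos in ants:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < WIDTH]
        if random.random() < NOISE:
            pos = random.choice(neighbors)                    # wander blindly
        else:
            pos = max(neighbors, key=lambda p: pheromone[p])  # follow the trail
        pheromone[pos] += DEPOSIT  # multiple interactions, mediated by the trail
        moved.append(pos)
    for i in range(WIDTH):
        pheromone[i] *= 1.0 - EVAPORATION
    ants = moved

for _ in range(100):
    step()
print([round(p, 1) for p in pheromone])  # the ants pile up on a few hot spots
```

Even this crude version shows the hallmark behavior: with deposit and evaporation balanced, the ants converge on a few shared trails with nobody in charge.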

There’s another great quote in here from improvisational jazz bassist William Parker: “Creative Music is any music that procreates itself as it is being played to ignite into a living entity that is bigger than the composer and player.” He was talking about the power of improvisation in 1995, but it’s not hard to see how the same idea translates to modern technology.

Initial Experiments with Agent Modeling

Conveniently, back in high school one of my projects was to build an agent modeling system based on “Sugarscape” (https://en.wikipedia.org/wiki/Sugarscape). I figured it would be easy to add some sound to it as a proof of concept. I was very wrong.

Here’s an example image of what the system looks like:

[Image: sugarscape_agents.png]
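For context, the core Sugarscape rules are simple. Here's a rough Python sketch of the classic movement-and-harvest step (the rules from Epstein and Axtell's model, not my old ActionScript code; sugar growback and agent death are omitted to keep it short):

```python
import random

SIZE = 50
# a grid of sugar values; each cell holds 0-4 units
sugar = [[random.randint(0, 4) for _ in range(SIZE)] for _ in range(SIZE)]

class Agent:
    def __init__(self):
        self.x, self.y = random.randrange(SIZE), random.randrange(SIZE)
        self.vision = random.randint(1, 6)      # how far the agent can see
        self.metabolism = random.randint(1, 4)  # sugar burned per step
        self.wealth = random.randint(5, 25)     # sugar currently held

    def step(self):
        # look along the four axes and move to the richest visible cell
        best = (self.x, self.y)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            for d in range(1, self.vision + 1):
                nx = (self.x + dx * d) % SIZE
                ny = (self.y + dy * d) % SIZE
                if sugar[ny][nx] > sugar[best[1]][best[0]]:
                    best = (nx, ny)
        self.x, self.y = best
        # harvest the cell, then pay the cost of living
        self.wealth += sugar[self.y][self.x]
        sugar[self.y][self.x] = 0
        self.wealth -= self.metabolism

agents = [Agent() for _ in range(20)]
for _ in range(10):
    for a in agents:
        a.step()
```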

For whatever reason, in high school I decided to implement it in ActionScript in Flash. It turns out Flash’s sound generation system is very difficult to work with: there’s no built-in way to just play a pitch, so you have to hand it every sample of a sine wave yourself, which means writing a loop that steps through sample indices, takes the sine of each value, and feeds the resulting data out for Flash to read. It was much more difficult than I anticipated and took far more time than I wanted for what was meant to be a quick demo.
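For comparison, here's roughly what that loop looks like in Python rather than ActionScript; the sample rate and WAV-file output are generic choices of mine, not Flash's actual API:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second

def sine_samples(freq, seconds, amplitude=0.5):
    """Generate raw samples by stepping through time and taking the sine."""
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

# write a one-second 440 Hz tone to a WAV file so we can hear it
samples = sine_samples(440.0, 1.0)
with wave.open("tone.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```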

I ended up getting it to simply pick a random agent each step and play a sound based on the resources that agent holds: higher frequencies indicate more resources, lower frequencies fewer. There are still a number of bugs that didn’t seem worth solving. If I ever wanted to do anything more with Sugarscape, I’d likely end up rewriting it in a different language.
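The mapping amounted to something like the sketch below. The frequency range and sugar cap here are illustrative stand-ins, not the values from my Flash version:

```python
import random

MIN_FREQ, MAX_FREQ = 220.0, 880.0  # illustrative audible range
MAX_SUGAR = 100                    # assumed cap on what an agent can hold

def agent_frequency(sugar_held):
    # linearly scale resources into pitch: more sugar -> higher frequency
    t = min(sugar_held, MAX_SUGAR) / MAX_SUGAR
    return MIN_FREQ + t * (MAX_FREQ - MIN_FREQ)

# each step: sample one agent's wealth and sound it
wealths = [random.randint(0, MAX_SUGAR) for _ in range(5)]
print([round(agent_frequency(w)) for w in wealths])
```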

Generative Music

Looking into the concept of generative music, and agent-based/emergent generative music in particular, I found that Wikipedia has a great list of generative music software: https://en.wikipedia.org/wiki/Generative_music.

One entry there is the “Viral symphOny,” whose idea is to express the spread of a disease through generated music. I found it difficult to actually relate what’s happening with the agents to the sounds being made, though. That’s definitely something I want for my project: a clear correlation between the agent modeling and the music produced.

Interview with the creator: https://vimeo.com/3908524

Another one is called “Staggered Laboratories” (http://www.staggeredlaboratories.com/). Essentially it’s constantly generative music; everything it plays is always new. The author has a great blog on the site where he talks about his techniques. The tech itself uses something called the “Aleator,” which is a “MIDI generating VST plugin that is written in C#.” It’s a neat idea, though from what I can tell the project is defunct and the site now plays previously recorded tracks.

Back on emergent music, I found this site: http://www.emergentmusics.org/theory. I’m still trying to understand exactly how it works, but basically he breaks the concept of music down into its base parts (pitch, volume, timbre, etc.) and then combines them, something like a grammar, to build up the music.
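Since I don't fully understand his system yet, here is only a guess at what "musical primitives combined by a grammar" could look like: a toy rewrite grammar over pitch and volume events. Every rule and name here is invented for illustration, not taken from that site:

```python
import random

# each non-terminal rewrites into a sequence of symbols;
# NOTE and REST are terminals that become (pitch, volume) events
RULES = {
    "PHRASE": [["MOTIF", "MOTIF"], ["MOTIF", "PHRASE"]],
    "MOTIF":  [["NOTE", "NOTE", "NOTE"], ["NOTE", "REST"]],
}
NOTES = ["C4", "D4", "E4", "G4", "A4"]

def expand(symbol, depth=0):
    if symbol == "NOTE":
        return [(random.choice(NOTES), random.choice(["soft", "loud"]))]
    if symbol == "REST":
        return [("rest", None)]
    if depth > 5:  # keep the recursion finite
        return expand("NOTE")
    out = []
    for s in random.choice(RULES[symbol]):
        out += expand(s, depth + 1)
    return out

print(expand("PHRASE"))  # e.g. [('E4', 'loud'), ('C4', 'soft'), ...]
```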

This is really cool and fun: Otomata, a generative sound sequencer for iOS (http://www.synthtopia.com/content/2011/07/17/generative-sound-sequencer-for-ios-otomata/).