So looking into the concept of generative music and agent-based/emergent generative music, I found that Wikipedia has a great list of generative music software here: https://en.wikipedia.org/wiki/Generative_music.

One entry there was something called the “Viral symphOny.” The idea was to express the spread of a disease through generated music. I found it difficult to actually relate what’s happening with the agents to the sounds being made, though. That’s definitely something I know I want for my project: a clear correlation between the agent modeling and the music produced.

https://vimeo.com/3908524    <—- interview with creator

This one’s called “Staggered Laboratories.” http://www.staggeredlaboratories.com/. Essentially it’s constantly generative music — everything it plays is always new. He has a great blog on the site where he talks about his techniques. The tech itself uses something called the “Aleator,” which is a “MIDI generating VST plugin that is written in C#.” It’s a neat idea, though from what I can tell the project is defunct and the site now plays previously recorded tracks.

Back on emergent music, I found this site: http://www.emergentmusics.org/theory. I’m still trying to understand exactly how this works, but basically he broke the concept of music down into its base parts: pitch, volume, timbre, etc. Then he combined these, somewhat like a grammar, to build up the music.
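To make that grammar idea concrete for myself, here’s a minimal sketch of what “music as a grammar over base parts” could look like. This is my own toy interpretation, not the site’s actual system: the rule names (“phrase”, “motif”, etc.), the terminal symbols, and the scale are all made up for illustration.

```python
import random

# Toy rewrite grammar: nonterminals expand into sequences of symbols,
# terminals ("up", "down", "hold", "root") describe melodic motion.
RULES = {
    "phrase": [["motif", "motif"], ["motif", "cadence"]],
    "motif": [["up", "up", "down"], ["down", "hold"]],
    "cadence": [["down", "down", "root"]],
}

def expand(symbol, rng):
    """Recursively rewrite a symbol until only terminals remain."""
    if symbol not in RULES:
        return [symbol]
    out = []
    for s in rng.choice(RULES[symbol]):
        out.extend(expand(s, rng))
    return out

def to_notes(terminals, scale=(60, 62, 64, 65, 67, 69, 71)):
    """Map terminal symbols onto MIDI pitches by walking up/down a scale."""
    idx, notes = 0, []
    for t in terminals:
        if t == "up":
            idx = min(idx + 1, len(scale) - 1)
        elif t == "down":
            idx = max(idx - 1, 0)
        elif t == "root":
            idx = 0
        # "hold" repeats the current scale degree
        notes.append(scale[idx])
    return notes

melody = to_notes(expand("phrase", random.Random(0)))
```

The nice property is that pitch is only one “layer”; you could run parallel grammars for volume and duration the same way and zip them together into full notes.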

This is really cool and fun: http://www.synthtopia.com/content/2011/07/17/generative-sound-sequencer-for-ios-otomata/ — Otomata, a generative cellular-automaton sequencer for iOS.