Final presentation for the “Improvagent” project here: Final Presentation
Added dynamics via volume variation: the number of agents in a quadrant now sets how loudly a note is played.
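A minimal sketch of that mapping, assuming a MIDI-style velocity scale; the agent cap and velocity range here are my placeholders, not values from the actual project:

```python
def quadrant_velocity(agent_count, max_agents=100, min_vel=20, max_vel=127):
    """Map the number of agents in a quadrant to a MIDI-style velocity.

    More agents -> louder note. The cap (max_agents) and the velocity
    range are illustrative assumptions, not the project's real values.
    """
    ratio = min(agent_count, max_agents) / max_agents
    return int(min_vel + ratio * (max_vel - min_vel))
```

An empty quadrant still plays at the floor velocity here; clamping at `max_agents` keeps a population explosion from overflowing the 0–127 MIDI range.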
With that I’m done for the semester. There are a lot of improvements that could be made, namely:
- Agents all die out fairly easily
- Still percussive, no note length variation or pitch variation
- Only one sound
- Not visually intuitive
I only had a vague idea of “agent-driven generative music” at the start, which evolved into making a computer-modeled version of the MahaDeviBot. While there are still a lot of things I wish I had done, overall I’m happy with the project.
Finally gave the agents something to do. Now there are many more “sources” (the green squares) in the environment, each with a random resource value from 0 to 100 (rendered white near 0 and greener near 100). Agents move to a source if they see one and take 10 resources from it. When a source reaches 0 resources, it dies out and disappears. Agents also consume 1 resource each turn, and if they reach 0 they die. In addition, an agent reproduces, creating a new agent, once it reaches 50 resources.
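The lifecycle above could be sketched roughly like this. The harvest amount (10), per-turn upkeep (1), and reproduction threshold (50) come from the description; movement/vision, the starting resource level, and the cost of reproducing are simplified assumptions of mine:

```python
import random

HARVEST = 10       # resources an agent takes from a source per visit
UPKEEP = 1         # resources an agent burns each turn
REPRODUCE_AT = 50  # resource level at which an agent spawns offspring

class Source:
    """A resource patch: white near 0, greener near 100."""
    def __init__(self):
        self.resources = random.randint(0, 100)

class Agent:
    def __init__(self, resources=10):  # starting level is an assumption
        self.resources = resources
        self.alive = True

    def step(self, visible_source=None):
        """One turn: harvest if a source is in view, pay upkeep,
        maybe die, maybe reproduce. Returns a child Agent or None."""
        if visible_source is not None and visible_source.resources > 0:
            taken = min(HARVEST, visible_source.resources)
            visible_source.resources -= taken  # source dies out at 0
            self.resources += taken
        self.resources -= UPKEEP
        if self.resources <= 0:
            self.alive = False  # caller removes dead agents from the board
            return None
        if self.resources >= REPRODUCE_AT:
            self.resources -= REPRODUCE_AT  # assumed cost of reproducing
            return Agent()
        return None
```

Charging the full 50 resources on reproduction is one assumption that would explain why populations die out easily: a parent drops back to near-starvation every time it spawns.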
The other major change is how interaction between boards works. Now, instead of adding agents, an interaction adds completely new sources and increases the resources of existing sources within the quadrant. Similarly, an interaction can cause all the sources in a quadrant to lose resources.
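A sketch of that board interaction, with a minimal `Source` stand-in; the number of new sources (`n_new`) and the resource delta are placeholder assumptions:

```python
import random

class Source:
    """Minimal stand-in: a resource patch with a 0-100 value."""
    def __init__(self, resources=None):
        self.resources = random.randint(0, 100) if resources is None else resources

def interact_boards(quadrant_sources, boost=True, n_new=3, delta=20):
    """Apply one board-to-board interaction to a quadrant's sources.

    If boost, add brand-new sources and raise every existing source's
    resources; otherwise drain every source. The constants n_new and
    delta are illustrative, not the project's real values.
    """
    if boost:
        for s in quadrant_sources:
            s.resources = min(100, s.resources + delta)
        quadrant_sources.extend(Source() for _ in range(n_new))
    else:
        for s in quadrant_sources:
            s.resources = max(0, s.resources - delta)
```

Clamping to the 0–100 range keeps the white-to-green rendering meaningful after repeated boosts or drains.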
There is already some interesting structure forming from these changes: it’s easy to see populations quickly form and die out. The main thing to work on next is adding more and different sounds.
I added new functionality by allowing agents to talk with each other. Now, after a set number of turns, two random “players” improvise together: one player kicks off the interaction and the other reacts. Right now, the rule is simple. If the initiating player’s value in a quadrant is above a certain threshold and higher than the reacting player’s corresponding quadrant, agents are added to that quadrant of the reacting player. Similarly, if the initiating player’s value in a quadrant is below a certain threshold and lower than the reacting player’s, the reacting player loses agents in that quadrant. As expected, this causes all the players to end up the same, as shown in the picture below. In this example, they all ended up playing on beats 1 and 3.
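The rule can be sketched as follows; the thresholds and transfer size are my placeholders, not the project's values:

```python
HIGH_THRESHOLD = 60  # assumed cutoff for a "strong" quadrant
LOW_THRESHOLD = 20   # assumed cutoff for a "weak" quadrant
TRANSFER = 5         # assumed number of agents gained or lost

def improvise(initiator, reactor):
    """Apply one interaction between two players.

    Each player is a list of quadrant values (agent counts). The
    initiator's strong quadrants pull the reactor up toward them; its
    weak quadrants pull the reactor down.
    """
    for q, value in enumerate(initiator):
        if value > HIGH_THRESHOLD and value > reactor[q]:
            reactor[q] += TRANSFER
        elif value < LOW_THRESHOLD and value < reactor[q]:
            reactor[q] = max(0, reactor[q] - TRANSFER)
```

Because every comparison moves the reactor toward the initiator and nothing ever pushes the other way, repeated interactions make all players converge on the same quadrant profile, which matches the observed behavior.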
To fix this, I need to add the sociability and cooperativeness metrics I originally described.
I began outlining the model. The general concept I already had in place: each “instrument” would be driven by its own agent-based model. Then, using data from each model, the instruments would listen and respond to each other.
The first thing I wanted to determine was which variables to pull out of the model. I decided on the following: pitch, volume, reverb, and pan. More could be added later, but I figured this was a good start. The more important bit is the spread of notes, a concept I partially borrowed from the MahaDeviBot. Essentially, each instrument has a percentage chance to play on each beat, and that pattern repeats each measure. (The number of beats per measure and the tempo are assigned universally.) The difficult part would be assigning pitch, volume, etc. for each note, but that’s not going to be in the initial attempt.
The variables assigned per player at the start are sociability and cooperativeness. Sociability is the chance the player will interact with others. Cooperativeness, if high, will cause a player to attempt to fill the same beats as other players; if low, it will make them more likely to fill empty space.
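The roles of the two variables come from the description above, but the concrete update rule below (nudging beat probabilities toward or away from another player's busy beats, with an assumed step size) is entirely my sketch:

```python
import random

class Player:
    """One instrument/board with its per-player personality variables."""

    def __init__(self, sociability, cooperativeness, beat_probs):
        self.sociability = sociability          # 0-1: chance of interacting
        self.cooperativeness = cooperativeness  # 0-1: high = match others' beats
        self.beat_probs = beat_probs            # play probability per beat

    def maybe_interact(self, other, amount=0.1, rng=random.random):
        """With probability sociability, nudge this player's beat spread
        toward the other player's busy beats (high cooperativeness) or
        toward their empty beats (low). The step size is an assumption."""
        if rng() >= self.sociability:
            return
        direction = 1.0 if self.cooperativeness >= 0.5 else -1.0
        for i, p in enumerate(other.beat_probs):
            # (p - 0.5) is positive on the other's busy beats, negative on gaps
            nudged = self.beat_probs[i] + direction * amount * (p - 0.5) * 2
            self.beat_probs[i] = min(1.0, max(0.0, nudged))
```

Unlike the threshold rule that made every player identical, low-cooperativeness players here actively move away from occupied beats, which should keep the ensemble from collapsing onto one pattern.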
The important step, then, is building a model that works with these variables I want.
Link to my presentation on “Agent Modeling Music” from March 9: Agent Modeling Music