
Visiones Sonoras Festival 2013

I spent this last week at the Visiones Sonoras Festival in Morelia, Michoacán, MX. Hosted by CMMAS at the UNAM campus in Morelia, the festival featured five days of lectures, concerts and workshops with over 40 invited composers, lecturers and performers, as well as over 40 guests on scholarship. The festival is largely a gathering of Latin American electro-acoustic composers, though there were composers, performers and lecturers from Japan and North America as well.

View of Morelia from the hotel


The festival had many interesting talks, exciting concerts and great discussions, and I will try to outline some of the highlights. However, I must give the disclaimer that almost the entire festival was conducted in Spanish and my command of the language is not yet perfect, so I will discuss the things that I think I understood.

Rear-view of the lecture hall


On the first day there was a colloquium and world premiere of a piece by a group of composers: Otto Castro, Adina Izarra, Fabián Luna, Miguel Noya, Jaime Oliver, Daniel Schachter, Rodrigo Sigal, Felipe Londoño and Luis Germán Rodríguez. The piece, “El Sutil Sonido de las Plumas” or “The Subtle Sound of the Feathers”, was a collaborative work by these composers. While all I understood at the colloquium was the word pájaro, the premiere of the piece clarified the frequency of this word: bird (pájaro) songs were the main source of material.

Colloquium by the collaborating composers

The second day featured a very engaging lecture by Ricardo Dal Farra about electro-acoustic music and the role of artists in environmental issues. He talked about the Balance-Unbalance International Conference, which generated lively discussion (although for me most of it went in one ear and out the other). At that evening’s concert I was particularly fond of a piece for violin and electronics by Felipe Pérez Santiago titled “Post War”.

On the right is the auditorium, the left where we eat our meals


The third day featured a lot of great talks and discussions. Felipe Pérez presented his album Mantis, which includes electro-acoustic music not afraid of phat beats; it is definitely worth a listen. Adina Izarra talked about the history of bird songs in music composition and their use in electro-acoustic music.

Brenda Brown continued the bird theme when she presented an amazing project to revitalize the soundscape of Tzintzuntzan. Professor Brown has carried out a number of projects combining sculpture, landscape architecture and soundscape, including the Crowley Listening Trail at the Crowley Nature Center in Sarasota, Florida and the MacDowell Listening Trails at the MacDowell Colony in Peterborough, New Hampshire, among others. In collaboration with restoration ecologist Roberto Lindig-Cisneros, Brown has designed a landscape of native Michoacán plants to attract hummingbirds to the Tzintzuntzan ruins. Tzintzuntzan is named from the P’urhépecha word meaning “place of the hummingbirds”. Hummingbirds have long been a symbol of the region, both in the Mesoamerican rituals once held at the site and in more recent local imagery. I hope to visit the site myself in the next few weeks and see the impact this project has had on the environment, both biologically and sonically.

Felipe Pérez talks about Mantis

The concert on the third night had two pieces that stood out in my mind: “9 Gardens” by Jaime Oliver and “Primeros encuentros con la vida y con la muerte (Leopoldo Muñoz)” by José Miguel Candela. Dr. Oliver’s piece features the MANO controller, an open-source controller of his design that relies on hand recognition via a single camera.

Candela’s piece was an electro-acoustic work built around testimony given by a survivor of the 1973 Chilean coup d’état. I was spared the gory details by not understanding the language, but words were not necessary to feel the weight of this powerful piece. Candela told me afterwards that this is one of four pieces he is making based on testimonials of the coup, in remembrance of the human rights violations 40 years ago.

Jaime Oliver discussing mapping and instrument design


The next day Jaime Oliver lectured on the function of his controller and the advanced mapping strategies he uses to achieve organic control. That afternoon Paul Rudy talked about composition as a sacred practice. As a formally trained composer, he rejects many of the ways he was taught to think about music; instead, he relies on intuition, spiritual practices, deep listening, sound healing and improvisation to realize his works. Professor Rudy also led a workshop on sound healing throughout the festival, more on this later.

Paul Rudy discussing composition as a spiritual practice


That night I particularly enjoyed a piece by Venezuelan composer Miguel Noya titled “Cryptochrome”. The piece was for theorbo and electronics and was performed by the Venezuelan instrumentalist Rubén Riera. Rubén’s wife, Adina Izarra, also had a piece, “Primer Zibaldone”, which the two performed together.

Another view of the beautiful UNAM campus


Saturday featured a talk on Technology in the Post-human/In-human World by José Miguel Candela. He showed us one of his works for dancers and electronics and discussed different interfaces, biological sensors and theories about humans and technology in the future.

The most pivotal moment of the conference for me was a demonstration by Paul Rudy and his workshop participants of their sound healing practices. The entire audience participated by closing our eyes, breathing, listening and vocalizing. This put me in the right mindset for enjoying that evening’s performance, especially Rodrigo Sigal’s piece “Repetition of Perception”, in addition to Paul Rudy’s “Sun’s soliloquy and Death’s thin melody”, Yuki Harada’s audio/visual piece “Reimage” and a virtuosic performance of Hebert Vázquez’s “Ángel del abismo” by guitarist Norio Sato.

Before the sound healing demonstration


Setting up for the party and final performance on Saturday afternoon


After the concert we all headed back to the hotel for a final party and performance. Miguel Noya performed his own pieces along with video projections and theorbo playing by Rubén Riera. Jorge Variego, a composer and excellent clarinetist (who performed a piece every night), accompanied by his generative free-jazz SuperCollider patch, had an awesome dueling bass clarinet improvisation with the clarinetist from Ensamble Noodus. After that, I performed my audio/visual work Sobras Rechonchas (Chubby Leftovers), which was misspelled in the program as Sombras Rechonchas (Chubby Shadows); this is perhaps an improvement on the title. Finally, the party truly started when DJ Miguel Ángel Valle dropped the beat: everyone began dancing and downing mezcal.

The only non-blurry shot I took before the mezcal and dancing took over


After an amazing week I am extremely pleased and honored to have participated in this festival. I met many awesome people from all over the Americas and am very excited to use this energy during my residency at CMMAS this month (more on that later). I got to know a lot of young artists, mostly from Mexico City, and am hoping to get a chance to travel there and collaborate with them on various projects, performances and workshops. I’d like to talk about all of them too, and hopefully I will after I get more involved in their world.

There were many more experiences I’d like to share and many other pieces I did not talk about, but for most of it you just needed to be there. I recommend that composers and sound artists apply for next year’s festival, and that anyone in the area come and check it out (all lectures and concerts are free and open to the public). Thanks to Rodrigo Sigal and the people at CMMAS for putting this event on and inviting me to attend; I cannot express my gratitude enough.

Musical Chairs

A while ago I thought of adapting the game of musical chairs for music ensembles. The rules are basically the same as the traditional game, except that instead of sitting in a chair when the music stops, you play the chair. This would perhaps be best done using various percussion implements like mallets, sticks and Suzuki bows. The other adaptation is that once you are “out” you improvise along with the music while the remaining players walk around the chairs.

When given the opportunity to work with an ensemble, I usually have other pieces I’d rather have them work on. That, coupled with an interest in simulating game pieces in ChucK and Processing, led me to make an agent-based model of this game. A video of the result can be found below:

I tried various methods for determining which player gets to a chair first and settled on choosing randomly, because it was the simplest to implement and made little difference to the audio/visual result compared with the other methods.
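The core of such a simulation is tiny. Here is a hypothetical re-sketch of the game logic in Python (the actual model was written in ChucK and Processing; the function and variable names here are my own illustration, not the original code):

```python
import random

def musical_chairs(players, seed=None):
    """Simulate musical chairs with random chair winners.

    Each round has one fewer chair than players; chair winners are
    chosen uniformly at random (the simplification described above).
    The player left standing is 'out' and joins the improvisers.
    Returns (final winner, players in the order they went out).
    """
    rng = random.Random(seed)
    remaining = list(players)
    improvisers = []
    while len(remaining) > 1:
        # One fewer chair than players: pick the winners at random.
        winners = rng.sample(remaining, len(remaining) - 1)
        (loser,) = [p for p in remaining if p not in winners]
        improvisers.append(loser)  # "out" players improvise along
        remaining = winners
    return remaining[0], improvisers
```

In the real piece the "out" players keep playing, so the `improvisers` list would feed a generative improvisation process rather than simply being recorded.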

“no-input output” album release

The Contemporary Music Lab (CML) at Dartmouth released an album of improvisations and compositions titled no-input output last spring. Nathan Davis directs the ensemble and produced the album. Carlos Dominguez played percussion, Jianyu Fan performed on a no-input mixer, Ryan Maguire played the RM1x sequence remixer, Dave Rector played cello and viola da gamba, and I played guitar and electronics and contributed the composition and mixing of the final track.

The album was reviewed in the SEAMUS newsletter by Tom Dempster. While he erroneously cites Ryan Maguire as co-director of the ensemble, he does have some nice things to say: “there are a lot of wonderful moments in the album and some lashes of vivid, mesmerizing colors amid some striking and gorgeous textures. “tomorrow” enshrines and distills all of the concepts and spaces of the album to a greatly rewarding under eight minute jaunt…” The reviewer seemed particularly impressed with Jianyu’s track “tomorrow” and compliments it at length. Thanks to Tom for the detailed listening, thorough review and thoughtful comments.

More performances by CML can be found on their YouTube Channel.

Euclidean Rhythm Generator

Godfried Toussaint wrote a 2005 paper, “The Euclidean Algorithm Generates Traditional Musical Rhythms”, which describes the application of Euclid’s algorithm, and more usefully Bjorklund’s algorithm, to musical rhythms. The Euclidean algorithm is a method for computing the greatest common divisor of two integers, while Bjorklund’s algorithm is a method for distributing a number of pulses as evenly as possible over a finite number of time steps. The details of Bjorklund’s algorithm can be found in his paper “The Theory of Rep-Rate Pattern Generation in the SNS Timing System”.
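The algorithm itself is compact. Here is an illustrative Python version (my external is a Max/MSP object, so this is a separate sketch of the algorithm, rotated so the pattern starts on its first onset):

```python
def bjorklund(pulses, steps):
    """Distribute `pulses` onsets as evenly as possible over `steps`
    time steps (Bjorklund's algorithm). Returns a list of 1s (onsets)
    and 0s (rests), rotated to begin on the first onset.
    E.g. bjorklund(3, 8) yields the Cuban tresillo [1,0,0,1,0,0,1,0].
    """
    if pulses <= 0 or steps <= 0 or pulses > steps:
        raise ValueError("need 0 < pulses <= steps")
    # Repeatedly pair off sequences, recording how many whole groups
    # (counts) and leftovers (remainders) occur at each level.
    counts, remainders = [], [pulses]
    divisor, level = steps - pulses, 0
    while True:
        counts.append(divisor // remainders[level])
        remainders.append(divisor % remainders[level])
        divisor = remainders[level]
        level += 1
        if remainders[level] <= 1:
            break
    counts.append(divisor)

    pattern = []

    def build(lvl):
        # Unwind the levels back into a flat sequence of 1s and 0s.
        if lvl == -1:
            pattern.append(0)
        elif lvl == -2:
            pattern.append(1)
        else:
            for _ in range(counts[lvl]):
                build(lvl - 1)
            if remainders[lvl] != 0:
                build(lvl - 2)

    build(level)
    i = pattern.index(1)          # rotate to start on an onset
    return pattern[i:] + pattern[:i]
```

Euclidean rhythm patterns are unique only up to rotation, so other implementations may return a rotated version of the same cycle.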

I have implemented Bjorklund’s algorithm as a Max/MSP external so one can easily generate Euclidean rhythms of any length and with any number of pulses. This can be useful for automatically generating rhythms that are complex enough to be interesting while still fitting into the typical meters of traditional music.

Here is the GitHub repository for the Euclidean Rhythm Generator Max/MSP object. The repository also has an example Max patch which demonstrates the object’s controls and function.

Feel free to use this object in your own projects and if you do please share it with me!

Operating System Sonification at CMMR2013

Andy Sarroff, a colleague of mine, just presented a project of ours at the 10th International Symposium on Computer Music Multidisciplinary Research in Marseille, France.

The presentation was about a system for sonifying Unix-like operating systems. Sonification is achieved using a Python-based message dispatcher which receives OS events from DTrace. In the demo, events are sonified using ChucK, although any synthesis engine could be substituted. The code is open source and available on GitHub. I encourage interested parties to download the source, modify it to their own preferences and share. For more details on the work, the proceedings paper may be found here.
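As a rough illustration of the dispatcher pattern at the heart of the system (this is not the actual SOS API; the class and probe names here are hypothetical), the routing layer can be sketched in a few lines of Python:

```python
class Dispatcher:
    """Hypothetical miniature of an OS-event dispatcher: event records
    arrive as (probe_name, payload) pairs and are routed to whichever
    sonification handlers subscribed to that probe. In SOS the events
    come from DTrace and the handlers forward to a synthesis engine."""

    def __init__(self):
        self.handlers = {}

    def subscribe(self, probe, handler):
        # Register a callable to be invoked for a given probe name.
        self.handlers.setdefault(probe, []).append(handler)

    def dispatch(self, probe, payload):
        # Fan the event out to every subscribed handler.
        for handler in self.handlers.get(probe, []):
            handler(payload)


# Example: collect read-syscall events; a real handler would instead
# send a message to ChucK (or another synthesis engine).
dispatcher = Dispatcher()
events = []
dispatcher.subscribe("syscall::read:entry", events.append)
dispatcher.dispatch("syscall::read:entry", {"pid": 1234, "size": 4096})
dispatcher.dispatch("syscall::write:entry", {"pid": 1234, "size": 512})
```

Decoupling event capture from synthesis this way is what lets any engine be swapped in for ChucK.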

SOS Model

Thanks to Andy for his hard work in creating and presenting this project. Here is Andy’s poster.

Reference:

Sarroff, A. M., Hermans, P., and Bratus, S. SOS: Sonify Your Operating System. In Proceedings of the 10th International Symposium on Computer Music Multidisciplinary Research (CMMR 2013), Marseille, France, October 2013.

BibTeX:

@INPROCEEDINGS{sarroff-hermans-2013,
  title = {SOS: Sonify Your Operating System},
  booktitle = {Proc.\ 10th International Symposium on Computer Music Multidisciplinary Research},
  author = {Sarroff, Andrew and Hermans, Phillip and Bratus, Sergey},
  year = {2013}
}

Tells Tall Tales album release

My album Tells Tall Tales is now on Bandcamp and all 7 tracks can be downloaded for free. The album was written and recorded on 2 different continents, over a period of about 3 years, and in 4 different studios. I did everything myself, with the exception of the following people, whom I’d like to thank:

Andy Sarroff for mixing track 3, Scott Fader for engineering track 2 and Jake Yemma for whistling one of the harmonies (the left-channel, I believe) on track 6.

Please distribute this album far and wide, any feedback is much appreciated.

Chubby Leftovers

Here’s a video made using Processing and ChucK communicating via OSC. I originally worked on this software for a performance at the Subtropics festival with Chübsteppe (CHaotic ÜBerSynasthetic Tele-Electro-acoustic Pop Performance Enclave), a laptop trio comprising Carlos Dominguez, Ryan Maguire and myself. The video engine is written in Processing and is controlled via OSC messages; I use ChucK for sound synthesis and for sending OSC messages to Processing so that the video corresponds to the audio. In an ensemble setting, the video engine runs on a server to which we all send messages. Each user controls one or more groups of rectangles that all appear on the same screen, and by altering a variety of parameters one can modify properties of the rectangles and animate them. I’ve developed a number of generative audio systems in ChucK with corresponding animations; the video briefly surveys some of them.
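For readers unfamiliar with OSC, the wire format is simple enough to build by hand. Here is an illustrative Python sketch of encoding a minimal OSC message with float arguments, following the OSC 1.0 specification (the address "/rect/1/x" is hypothetical, not the actual address scheme our patches used):

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with float32 arguments.

    Per the OSC 1.0 spec: strings are ASCII, null-terminated, and
    padded with nulls to a 4-byte boundary; the type tag string starts
    with ','; float arguments are big-endian IEEE 754 float32.
    """
    def pad(s):
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)

    typetags = "," + "f" * len(floats)
    return pad(address) + pad(typetags) + b"".join(
        struct.pack(">f", f) for f in floats)

# e.g. set a rectangle group's x position (hypothetical address):
packet = osc_message("/rect/1/x", 0.5)
```

In practice ChucK's OscSend and Processing's oscP5 library handle this encoding for you; the sketch just shows what travels over the wire.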
