The Art of MIDI Orchestration (Part 1)

The principles of orchestration have been presented in several classic texts by Kennan, Adler, Piston and others. In what ways do these principles apply when considering the virtual orchestra?

Just as an acoustic score’s realization will differ when played by two different orchestras, a MIDI-realized score is shaped by the hardware and software used in the studio and by the level of musicianship brought to the production process. Though musicians use various hardware and software platforms to realize their ideas, the core issue is universal: how to achieve the most musically expressive score with the technology at your disposal. We’ll focus on the many details that help bring expressiveness and intention to our music.

Many musicians regard MIDI as a mock-up medium for what is ultimately meant to be performed by a live ensemble. This perspective often means that, while sequencing a composition, shortcuts are taken and decisions that ought to be made are not. For composers convinced of the expressiveness of MIDI as an artistic medium in its own right, this article addresses some of those techniques.

Like any medium, MIDI has its strengths and limitations. In the acoustic world, much of what we accept as part of the musical experience involves sounds that are not really musical at all: fingernail noises against strings, the sound of breath, and mouth clicks, for example. These non-musical artifacts are so deeply accepted in our musical culture that we simply ignore them and focus on the music itself. But when a new medium arrives we become very critical and sense shortcomings very quickly. This makes it all the more important to understand how to infuse MIDI instruments with musicality, expression, gesture and intention. It means understanding your sounds and samples and exploiting all of the parameters that can lead to deeper expression. Very satisfying musical results are quite possible with MIDI, and the situation is improving with every new generation of hardware and software and the ongoing evolution of sample libraries.

Orchestration styles change. Stravinsky’s orchestrations are very different from Mozart’s, as Copland’s are from Mahler’s. Since the virtual orchestra defines a medium, not a musical style or genre, this divergence of approach to orchestration remains true in the virtual world as well. We’ve seen musique concrète, sound design, electronica and the virtual orchestra evolve from electronic music, and we will continue to see new genres and styles find a home in this medium. For composers interested in applying the principles of orchestration to MIDI, the concepts of orchestral balance, blend, transparency and orchestral weight still make sense; we must still be concerned with primary, secondary and tertiary materials; and knowing how to score a good tutti remains useful. We will return to these concepts later in the article.

It is sometimes difficult to separate orchestration from composition. Many of the timbre choices an orchestrator makes have to do with planning how the piece’s structure evolves, and orchestral textures are often employed to contribute to the form of the composition. I find Walter Piston’s idea of the seven textural types very useful. Please see his book on orchestration for a complete explanation and examples.

  • Orchestral Unison
  • Melody & Accompaniment
  • Secondary Melody
  • Part Writing
  • Contrapuntal Texture
  • Chords
  • Complex Texture

In the electronic orchestra, even a single synthesizer timbre can be a complex texture in and of itself, with multiple amplitude and filter envelopes, dynamic panning and modulation of harmonics synchronized to tempo. This is new territory, and the point where classical orchestration is not going to be of much help. The virtual orchestra gives us new options: we can use samples of acoustic instruments to orchestrate our music, and/or we can use sounds that cannot be duplicated in the acoustic realm; these sounds are often complex, sometimes with non-integer harmonics and often with a built-in rhythmic pulsation produced by sample-and-hold, an LFO or other devices. When using complex electronic sounds, listen to the harmonics and rhythmic patterns that are present; they can hint at how to integrate the timbre into an orchestral setting.

Designing the Ensemble

One of the great joys of MIDI is that it gives the musician the capability to pick and choose instruments that, in earlier times, most likely would not have been heard together in an ensemble. In my Five Songs on the Poetry of Tu Fu (Ottava 02-006) I designed an ensemble using samples of Chinese percussion and other instruments from that region of the world, a flute from South America, an Irish harp, Western strings and sounds of water and wind.

By going through our synthesizer patches and sample libraries we create a specific ensemble for the needs of the piece. Later on, if an instrument needs to be deleted or added, it’s possible without too much diversion from the creative process. The idea that the sole function of MIDI is to imitate the traditional classical orchestra can be put to rest when we look at the creative options. Mixing unusual combinations of timbres is one of the new benefits of the virtual orchestra, and there is a great deal more to explore in this medium because of its proven capacity to spawn new styles of orchestration and music. The key is to design an ensemble in which the instruments sound good together. As in so many aspects of artistic creativity, a particular element may be expressive and appropriate by itself yet contribute nothing to the whole in context. If the ensemble is chosen with care and sensitivity we are off to a good start, as each timbre will play an integrated part in the composition.

The Micro Level of Sequencing

It is impossible to discuss orchestration in the digital world without a brief discussion about sequencing. The digital orchestrator isn’t just assigning musical parts to instruments, but also defining how those instruments will be triggered (played) on the final recording. Though traditional orchestration often involves precise instruction as to how notes and phrases are to be played, with MIDI the manner in which notes are sequenced and connected to one another is a matter of supreme importance. If care is not taken, phrases will sound mechanical (the death of expressiveness) and choppy, and no amount of brilliant orchestrating can obscure this problem.

Six essential parameters concern the virtual orchestrator for each note:

  • pitch
  • duration
  • timbre
  • envelope (primarily amplitude attack and release)
  • velocity
  • time (location relative to the beat)
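These six parameters can be modeled as a simple note-event record. Here is a minimal sketch; the field names and units are my own illustrative assumptions, not taken from any particular sequencer:

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    pitch: int        # MIDI note number, 0-127
    duration: float   # gate time, in beats
    program: int      # patch selection (timbre)
    attack: float     # amplitude envelope attack time, in seconds
    release: float    # amplitude envelope release time, in seconds
    velocity: int     # key velocity, 0-127
    start: float      # time: location in beats, relative to the grid

# A middle-C half note, slightly soft, landing right on the beat:
note = NoteEvent(pitch=60, duration=2.0, program=56,
                 attack=0.01, release=0.3, velocity=90, start=0.0)
```

Thinking of each note as a bundle of these values makes the editing passes described below concrete: each technique is a small, deliberate adjustment to one or two fields.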

To sequence expressive phrases, satisfying legato, fast runs and other gestures, one or more of these parameters often needs attention. Attack and release times, note length and velocity play a crucial role in sequencing a fine legato line, and sometimes a very small adjustment of one parameter does the trick. Even a loud tutti will not cover up these intimate connections between notes. In a fast passage, for example, select every other note (or whatever group of notes represents the weak pulse to you) and lower their velocity by 20% or so. This helps shape the line by adding variation, and it can also articulate where the accented notes fall.
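The weak-pulse trick above amounts to one pass over the line’s velocities. A minimal sketch (the helper name is hypothetical; velocities are assumed to be on the MIDI 0–127 scale, with every other note treated as the weak pulse):

```python
def soften_weak_pulses(velocities, reduction=0.20):
    """Lower every other note's velocity (the weak pulses) by a fraction."""
    return [
        max(1, round(v * (1 - reduction))) if i % 2 == 1 else v
        for i, v in enumerate(velocities)
    ]

# Four even notes become alternately strong and weak:
shaped = soften_weak_pulses([100, 100, 100, 100])  # → [100, 80, 100, 80]
```

In practice you would select whichever notes form the weak pulse in your meter rather than strictly every other one; the point is the modest, consistent reduction rather than the exact grouping.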

In a slow, legato passage, consider two half notes; call them 1 and 2. Note 1’s release time is one parameter that may need adjustment; its gate time, or actual length, is another to tweak. The goal is to make the attack of note 2 as neutral as possible, so that note 2 seems to begin the moment note 1’s decay ends, but with no increase in amplitude. In effect, note 1’s length overlaps the start of note 2. Lengthening note 1 by 3–12% usually does the trick, depending on the situation. By adjusting the velocity and attack of note 2 it is possible to sequence a smooth legato. Viewed as a waveform, the connection should show as little jump in amplitude as possible at the point where note 2 begins. As an adjunct to learning how to do this well, I highly recommend that the virtual orchestrator regularly incorporate the human voice as an instrument in the mix; there is much to be learned from the phrasing, dynamics and expressiveness of a fine, well-trained singer.
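The overlap-and-soften procedure can be sketched as a pass over a note list. This is a simplified assumption of how such an edit might look (the function name, tuple layout, and the 85% velocity scaling for the following note are illustrative; the 3–12% overlap range is from the discussion above):

```python
def legato_overlap(notes, overlap_pct=0.08, velocity_scale=0.85):
    """Lengthen each note into the next and soften each following attack.

    `notes` is a list of (start, duration, velocity) tuples, times in beats.
    overlap_pct: fraction by which each note's gate time is extended
    (roughly 3-12% usually does the trick, depending on the patch).
    """
    out = []
    for i, (start, dur, vel) in enumerate(notes):
        if i < len(notes) - 1:
            dur = dur * (1 + overlap_pct)              # overlap into next note
        if i > 0:
            vel = max(1, round(vel * velocity_scale))  # neutralize the attack
        out.append((start, dur, vel))
    return out

# Two half notes (2 beats each) at velocity 100:
phrase = legato_overlap([(0.0, 2.0, 100), (2.0, 2.0, 100)])
```

The right amounts depend entirely on the patch’s envelope, so in practice these values are found by ear, one note connection at a time.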

When sequencing brass, I prefer to use three individual trumpet patches to simulate a trumpet ensemble. The most obvious advantage is that you retain three-part polyphony in the trumpets if needed (if you use a brass ensemble patch in three-part writing, you now have nine instruments playing rather than three!). To give those three trumpets some autonomy, I detune the left and right instruments (trumpets 2 and 3) by 20 cents or so, one higher and the other lower. I also move trumpets 2 and 3 off the beat, one slightly ahead in time and one slightly behind. Finally, trumpets 2 and 3 are panned hard left and hard right. The depth of this ensemble effect depends on how much of each modulation you apply.
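The three-trumpet treatment can be sketched as expanding one note into three voices. A minimal sketch, with illustrative assumptions: the 0.02-beat timing offsets are a placeholder amount (the text only says slightly ahead and behind), and pan is expressed on the MIDI CC10 scale of 0 (hard left) to 127 (hard right):

```python
def trumpet_ensemble(pitch, start, velocity, detune=20, offset=0.02):
    """Expand one trumpet note into three voices: a centered lead plus
    two detuned, time-shifted, hard-panned partners."""
    return [
        {"pitch": pitch, "start": start, "velocity": velocity,
         "detune_cents": 0, "pan": 64},            # trumpet 1: lead, centered
        {"pitch": pitch, "start": start - offset, "velocity": velocity,
         "detune_cents": detune, "pan": 0},        # trumpet 2: early, sharp, hard left
        {"pitch": pitch, "start": start + offset, "velocity": velocity,
         "detune_cents": -detune, "pan": 127},     # trumpet 3: late, flat, hard right
    ]

# One middle-C note on beat 1 becomes a three-voice section:
voices = trumpet_ensemble(pitch=60, start=1.0, velocity=100)
```

Varying the detune and offset amounts per note, rather than applying them uniformly, keeps the effect from sounding like a static chorus patch.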

Since a MIDI sequence is a performance of numerous instruments playing together, the virtual orchestrator is not only responsible for creating an effective orchestration but must also ensure that the sequence is rich with expression and detail. I cannot stress enough how important detailed sequencing is to how the orchestration ultimately sounds. As with nearly all music production issues, a problem corrected earlier in the production process makes the overall success of the final recording easier to achieve. Exceptions occur when flexibility is necessary: for example, withholding EQ until the music is rendered to a wave file, where non-destructive processing can be applied. This lets you always return to the MIDI file recorded with no EQ and to the unprocessed wave files, with the option of non-destructive processing. On the other hand, if you know the timpani is too boomy, it’s best to deal with that isolated problem early, as balancing the sound becomes more problematic if individual cases of imbalance are not fixed in the orchestration and/or the mix. Why wait to EQ the entire mix and risk affecting an element you don’t want changed? EQ just that one instrument and save yourself trouble later on.

No matter how skillful your orchestration, if dynamics, tempi, or program changes are left static, orchestration alone can accomplish only so much in the MIDI ensemble. Masking an element that doesn’t sound good on its own always hurts the music. The best approach is to isolate that element and make it work. If you want the whole to be greater than the sum of its parts, I know of no shortcut around this problem.
