MIDI for Dummies

I need to make at least some of my own sounds and organize them my way, but I have no idea how to get any sort of control over GMIs, soundfonts and soundbanks. So I end up, for instance, with a song in my head but having to wade through hundreds of preset sounds, or building something around whatever soundfonts or samples I can scrounge from wherever they happen to be scattered.

Then I tried tweaking some WAV files, copy-pasting them into REAPER and changing the pitch.

I am extremely limited at this time.
 
1. Introduction
For some reason this post was made a sticky, so I'm going to try to make the thread more useful. There are three basic ways to communicate MIDI to and from your computer:

1. FireWire (aka IEEE 1394)
2. USB
3. Internal sound card with a MIDI (DB-15) connector

I'm going to focus on the internal sound card because that is where the interface began. Admittedly, this is legacy hardware and the other two paths are the current state of the art, but in my opinion the PC card was never brought to its full potential. Maybe we can change that in this thread. All MIDI requires rendering, which roughly means making sounds from the MIDI event data (MIDI is a compact stream of performance commands, not audio). Most sound cards have chips which render MIDI, and some have the additional capability of direct memory access (DMA). DMA allows very low latency, which is arguably more important than bandwidth for MIDI because the data itself is so small. Let's have a look at interface bandwidth:

ISA (16MB/s theoretical)
PCI (133MB/s theoretical, ~60MB/s real)
USB 1.x full speed (12Mbit/s = 1.5MB/s)
USB 2.0 (60MB/s theoretical, ~45MB/s real)
USB 3.x (approaching or surpassing 1GB/s, depending on generation)
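
To put those numbers in perspective, here is a rough back-of-the-envelope comparison in Python between a very busy MIDI stream and one channel of CD-quality audio. The message size and sample rate are standard; the message rate is my own generous assumption:

# Rough bandwidth comparison: a dense MIDI stream vs. one channel
# of 16-bit/44.1kHz audio.
MIDI_MSG_BYTES = 3        # a note-on or note-off is 3 bytes
MSGS_PER_SEC = 1000       # assumption: very dense playing
midi_rate = MIDI_MSG_BYTES * MSGS_PER_SEC     # ~3 KB/s

SAMPLE_RATE = 44100       # CD-quality sample rate
SAMPLE_BYTES = 2          # 16-bit mono
audio_rate = SAMPLE_RATE * SAMPLE_BYTES       # ~88 KB/s

USB1_RATE = 1.5e6         # first-generation USB full speed, bytes/s

print(f"MIDI:  {midi_rate / 1e3:.1f} KB/s")
print(f"Audio: {audio_rate / 1e3:.1f} KB/s")
print(f"Fits in USB 1.x? {midi_rate + audio_rate < USB1_RATE}")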

What you will note is that even lowly first-generation USB has enough bandwidth for single-channel audio I/O with plenty of room left for MIDI. The connections of the interfaces are described below:

PC card (PCI or ISA) uses a male 15-pin D-sub connection breaking out to the standard 5-pin DIN MIDI ends, ~$50
USB uses a standard 4-conductor cable with various ends, ~$10

In the next installment, I will attempt to explain MIDI rendering and where it takes place.

2. Rendering
In this segment we will briefly go over rendering, the fleshing out of MIDI data into sound. The Audiofanzine link in the first post gives some good analogies of MIDI to actual sound, so if you don't understand the basic concept of MIDI you should read that first. Rendering is done by a processor, usually a dedicated one but not necessarily. As chip integration marches forward, we find these synthesizer functions folded into multi-function chips. The only way to know the capability of your particular chip is to become familiar with its specs, which is not always as simple as it should be. Once you know your chips and the synth functions they support, the next thing is to know where they are, because where they are will affect how useful they are for a particular function. In the part above this one, I highlighted the common MIDI recording interfaces and their bandwidth. What I did not include was processing latency. This parameter is critical in certain situations and depends on both your particular hardware and software configuration, along with the limitations of each. For example, an ISA card that cannot do DMA at the hardware level may have the bandwidth for 8-in/8-out channels but will be severely limited by its abysmal latency. Conversely, an ISA card may have excellent DMA hardware but poor driver software preventing it from performing optimally. Drivers are a concern for all interfaces, both internal and external.
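
Since we keep coming back to latency, here is a quick sketch of where the numbers come from, using the simplest possible model of one audio buffer of delay in each direction. Real driver stacks, especially non-DMA ones, typically add more buffering on top of this, which is my assumption behind the "abysmal latency" above:

# One-way and round-trip buffering latency at 44.1kHz for a few
# common buffer sizes; each extra buffer a driver adds stacks
# another one-way delay on top.
SAMPLE_RATE = 44100

for frames in (64, 256, 1024):
    one_way_ms = frames / SAMPLE_RATE * 1000
    print(f"{frames:5d} frames -> {one_way_ms:5.1f} ms one way, "
          f"{2 * one_way_ms:5.1f} ms round trip")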

Okay, so where is your synth rendering done? If you have an internal sound card, it will be on that card. It can also be part of the supporting chipset on your motherboard, though not typically; even when the chipset has the hardware, there is usually no driver to make it functional. Nowadays, rendering is more often done in an external box connected to the PC with a cable. First-generation USB did not support DMA, and it was only partially implemented in the second generation; it really depends on how the device was designed. FireWire is DMA capable, but again it depends on how it was implemented in your hardware and driver. DMA can reduce your rendering latency by a factor of 10, which may or may not be important in your application; if you are live-monitoring your recording, it's critical. The complete MIDI command set describes everything about the sound the same way a dictionary describes everything about a book: you make art by choosing the right words in the right sequence.
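
To make the dictionary analogy concrete, here are a few of the "words" as raw bytes, following the standard MIDI 1.0 channel-message layout. The note and velocity values are just example choices:

# Raw MIDI 1.0 channel messages: the high nibble of the status byte
# is the command, the low nibble is the channel (0-15).
NOTE_ON  = 0x90   # note-on,  channel 1
NOTE_OFF = 0x80   # note-off, channel 1

middle_c = 60     # example note number
velocity = 100    # example key velocity

note_on  = bytes([NOTE_ON,  middle_c, velocity])
note_off = bytes([NOTE_OFF, middle_c, 0])

print(note_on.hex(' '))    # 90 3c 64
print(note_off.hex(' '))   # 80 3c 00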

MIDI can be decoded (rendered) and encoded (transcribed). Let's say you have a synthesizing instrument like a MIDI keyboard with an onboard processor similar to the one in your computer. That keyboard issues MIDI commands which are rendered by the processor into analog waveforms. These can be amplified, stored directly, or re-encoded back to MIDI by another processor. Conversely, the MIDI input you create as you play can be passed directly to, or combined with, other MIDI files at another location. All that is required is an identical dictionary and compatible storage and transfer. When unaltered MIDI is stored as MIDI and rendered again, it plays back exactly as it was originally produced. Now, let's say you are recording an analog instrument like a clarinet into a microphone. Here the waveforms have to be transcribed into MIDI data, and how much of the performance was captured and how well the encoder can describe that capture will dictate how closely the rendered result matches the original. Things become interesting when you start to edit these MIDI files. You can transform the original instruments to mimic others, but for this to work with expected results you need good virtual instrument descriptions, and we'll talk about those next.
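
As a sketch of what "stored as MIDI" means in practice, here is a minimal type-0 Standard MIDI File built with nothing but the Python standard library. The file name and the single middle-C quarter note are arbitrary examples; the byte layout comes straight from the SMF spec:

import struct

# One track, one middle-C quarter note. The file is just the command
# stream plus timing, so playback is bit-exact, never a re-recording.
TICKS_PER_BEAT = 480

track = bytes([
    0x00, 0xC0, 0x00,         # delta 0: program change -> piano
    0x00, 0x90, 60, 100,      # delta 0: note-on, middle C
    0x83, 0x60, 0x80, 60, 0,  # delta 480 (variable-length): note-off
    0x00, 0xFF, 0x2F, 0x00,   # delta 0: end-of-track meta event
])

header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, TICKS_PER_BEAT)
chunk  = b"MTrk" + struct.pack(">I", len(track)) + track

with open("one_note.mid", "wb") as f:
    f.write(header + chunk)

with open("one_note.mid", "rb") as f:
    assert f.read() == header + chunk   # stored MIDI survives exactly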

3. MIDI Instruments
We need to make a distinction here between MIDI physical and virtual instruments. Neither produces analog sound signals directly. You should be familiar with MIDI keyboards. These are virtual instruments in the sense that at their heart is a MIDI computer and clock that turn the tactile input into MIDI data. You may have seen other digital instruments that work on the same principle. What distinguishes these from the purely software virtual instruments is the tactility; you can achieve the same sound with either if the MIDI logic is identical. Purely software instruments reside in processor memory, which can be on a sound card chip or in a file on your hard drive. These files come in many formats, and some are exclusive to specific manufacturers, so be aware of this when you commit to one: you may be in love with a particular soundfont but discover that the format is inconvenient for the rest of your workflow. MIDI instruments contain the waveform signatures that flesh out the aspects of the sound besides pitch, duration, inflection and amplitude. The great thing about using them is the ability to change the instrument in an arrangement. Is your string arrangement too soppy? Try some clarinets. Everything about the recording stays the same except the sound. This is amazingly powerful and probably the best reason to compose in MIDI. I'm not going to go into more detail here because the field is still developing; your best bet is to stay active on boards like this and exchange experiences with others to form opinions on what will work for you.
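
That instrument swap is literally a one-message edit in General MIDI: a program change on the channel in question. A minimal sketch, assuming your strings sit on channel 1 and using the standard GM program numbers:

# Re-voicing a GM arrangement: the performance data stays identical,
# only the program (instrument) assignment changes.
GM_STRING_ENSEMBLE = 48   # GM program 49, zero-indexed on the wire
GM_CLARINET        = 71   # GM program 72

def program_change(channel, program):
    # 0xCn status byte plus a 7-bit program number.
    return bytes([0xC0 | (channel & 0x0F), program & 0x7F])

# Swap the string pad on channel 1 for a clarinet:
print(program_change(0, GM_CLARINET).hex(' '))   # c0 47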
 