Why Can't Acoustic Instruments Be Perfectly Synthesized?

Thread starter: propman (Active member)
Frequencies are always the same, all the way across the board. C4 is a steady 261.625565 hertz tone and the only thing making C4 on one instrument sound different than C4 on another (aside from amplitude) is the layering of different overtones at different intensities. Therefore -- in theory -- it should be possible to make a completely synth grand piano that sounds exactly like the real thing. Obviously that hasn't happened and they almost all sound like crap. So, nerds, can you explain why that is?
 
Frequencies are always the same, all the way across the board. C4 is a steady 261.625565 hertz tone and the only thing making C4 on one instrument sound different than C4 on another (aside from amplitude) is the layering of different overtones at different intensities.

I'm not sure that this is true. Furthermore, I'd venture a guess that taking this as fact is probably the reason old virtual instruments kinda suck.
They're logical, mechanical and predictable; a real instrument, environment and performer are not.
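For what it's worth, here's that "logical, mechanical" premise taken literally as code -- a fixed stack of harmonically related sines at fixed intensities. A minimal sketch in Python; the partial amplitudes are invented, not measured from anything:

```python
import numpy as np

SR = 44100           # sample rate in Hz
C4 = 261.625565      # the fundamental quoted in the OP

def static_additive_note(freq, partial_amps, dur=1.0, sr=SR):
    """The premise taken literally: harmonically related sine waves
    layered at fixed intensities, with nothing changing over time."""
    t = np.arange(int(dur * sr)) / sr
    note = np.zeros_like(t)
    for n, amp in enumerate(partial_amps, start=1):
        note += amp * np.sin(2 * np.pi * n * freq * t)
    return note / np.max(np.abs(note))   # normalise to avoid clipping

# Invented amplitudes for the first six partials -- this sounds like a
# cheap organ stop, not a piano, which is rather the point.
tone = static_additive_note(C4, [1.0, 0.5, 0.33, 0.2, 0.12, 0.08])
```

Nothing about that evolves, which is exactly what makes it sound synthetic.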

Therefore -- in theory -- it should be possible to make a completely synth grand piano that sounds exactly like the real thing. Obviously that hasn't happened and they almost all sound like crap. So, nerds, can you explain why that is?

I dunno man. Some things never seem to sound good when synthesised, but there are some damn good piano synths out there.
I use a digital piano exclusively and I'm sure plenty of people can tell, but I've never been criticised for it.
I also have a Roland MU50 which is hilariously obviously fake.
 
I don't know if this was the direction you were going in, but it's damn tough to even record something and have it reproduced anywhere near as convincingly as the original.
 
Depending on the instrument, one (or quite possibly all) of the following complicate things:
- There are a lot of overtones.
- There are sounds other than overtones (percussive sounds and the like), at least with some instruments.
- Everything changes over time in the course of a single note, and quite quickly.
- Successive notes have different overtones, different other sounds, and everything changes in a different way, at different speeds and at different times than it did in prior notes.

Actually, there are some sample-based grand pianos which sound pretty good to my ear, if not exactly like the real thing. At the single-note level, a piano is actually relatively simple compared to, say, a woodwind or violin: the hammer always hits in the same place, and the strings aren't manipulated with anything other than the hammers and the damper (except in the case of some notably "modern" compositions, or possibly Jerry Lee Lewis). The complication in the piano comes from the fact that the player is hitting lots of keys at a time and in rapid succession, and the strings aren't independent, isolated devices; they're all lying in reasonably close proximity in a box, pressed against the same soundboard.
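To put the "everything changes over time" point from the list above into code: same layered-sines idea, but each partial gets its own decay, so the spectrum shifts over the course of a single note. Again just a sketch with invented numbers:

```python
import numpy as np

SR = 44100

def evolving_additive_note(freq, partial_amps, decay_rates, dur=2.0, sr=SR):
    """Layered sines where each partial has its own exponential decay,
    so the balance of overtones changes as the note rings out."""
    t = np.arange(int(dur * sr)) / sr
    note = np.zeros_like(t)
    for n, (amp, rate) in enumerate(zip(partial_amps, decay_rates), start=1):
        note += amp * np.exp(-rate * t) * np.sin(2 * np.pi * n * freq * t)
    return note / np.max(np.abs(note))

# Invented numbers: higher partials start quieter and die away faster,
# which is roughly what happens on a struck or plucked string.
tone = evolving_additive_note(261.625565,
                              [1.0, 0.5, 0.33, 0.2, 0.12],
                              [1.5, 2.5, 3.5, 5.0, 7.0])
```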
 
Oh yeah: and with the exception of certain instruments, C4 is not a steady 261.625565 Hz from instrument to instrument or from note to note. Even aside from slurs and bends, players finesse pitch up and down all the time, and some instruments, by nature, are sharp at the attack. For that matter, the natural intonation of many instruments isn't equal temperament, but some variant of just intonation.
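To put rough numbers on that last point, here's a quick comparison of equal-tempered versus just intervals above C4 (standard textbook ratios, nothing instrument-specific):

```python
C4 = 261.625565

# Equal temperament: every semitone is a factor of 2 ** (1 / 12).
equal_third = C4 * 2 ** (4 / 12)    # E4, about 329.63 Hz
equal_fifth = C4 * 2 ** (7 / 12)    # G4, about 392.00 Hz

# Just intonation: simple whole-number ratios above the same C4.
just_third = C4 * 5 / 4             # about 327.03 Hz
just_fifth = C4 * 3 / 2             # about 392.44 Hz

print(f"major third differs by {equal_third - just_third:.2f} Hz")    # ~2.60
print(f"perfect fifth differs by {just_fifth - equal_fifth:.2f} Hz")  # ~0.44
```

A couple of hertz on a single interval is easily enough to change the character of a chord.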
 
OK, OK . . . Maybe I was flawed in my delivery, but when I said C4 is a steady 261.625565 hertz, I meant that literally. A perfect C4 is always 261.625565 hertz, and even if acoustic instruments could be that implausibly precise, any two given instruments would still sound different relative to each other. I wasn't assuming that any acoustic instrument was mathematically perfect in its tone generation.

With that cleared up (I hope), I'm wondering why -- with the prevalence of physics engines in video games and the like -- it isn't possible to create a virtual instrument that could push calculations (interaction between strings, the body of the instrument, the room, the mic, etc.) to produce sound the same way it's produced in the real world. I mean, I understand that something like that would be a slap in the face to purists (and I consider myself one, to a point), but what I'm questioning is not the ethics of such a venture but its viability (in case an ethics discussion comes up).
 
even if acoustic instruments could be that implausibly precise, any two given instruments would still sound different relative to each other. I wasn't assuming that any acoustic instrument was mathematically perfect in its tone generation.

Yeah, OK, I take your meaning.
Just making it stand out because that's debate/derail material right there.

It's like opening a thread with "I know Macs are better, but..." ;)

How familiar are you with current software?

The reason I ask is that I am not, but I've heard MIDI film scores that blew me away before.
I remember trying to find out what those guys were using, but that's about all.
Was GigaStudio one of them?

Pro Tools has come bundled with the Xpand synth for years, but 90% of the patches suck. They always have.
In the right hands, blah blah, but I always figured it wasn't the cutting edge of synth instruments.
 
That's modeling synthesis, which is different from sampling. I'm not up to speed on the latest developments, but it's the approach taken by the Yamaha VL70-m. That's kind of an ancient bit of equipment, and it's been discontinued for a while.

There's a Wikipedia entry on the general subject, though it's not super informative:
Physical modelling synthesis - Wikipedia
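For a feel of what physical modelling means in practice, about the simplest possible example is a Karplus-Strong plucked string -- a toy model, and certainly not what the VL70-m actually runs, but it shows the idea of computing the sound from a physical-ish process rather than playing back recordings:

```python
from collections import deque
import numpy as np

SR = 44100

def karplus_strong(freq, dur=1.5, sr=SR, damping=0.996):
    """Toy plucked string: a delay line seeded with noise, fed back
    through a two-point average (a crude low-pass) with slight decay."""
    period = int(sr / freq)                          # delay-line length in samples
    delay = deque(np.random.uniform(-1, 1, period))  # the initial 'pluck'
    out = np.empty(int(dur * sr))
    for i in range(len(out)):
        first = delay.popleft()
        out[i] = first
        delay.append(damping * 0.5 * (first + delay[0]))
    return out / np.max(np.abs(out))

pluck = karplus_strong(261.625565)                   # roughly C4
```

The delay-line length sets the pitch, and the averaging filter darkens the tone as the note decays, which already gives you some of that "changes over the course of a note" behaviour for free.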
 
Yup, acoustic instruments are not pure frequencies.

Nearly every acoustic instrument sounds different due to differences in materials and construction.

For example, when I went to buy a pro tenor sax a few years ago, I auditioned a dozen of them, and even horns of the same make and model sounded 'different'. Pro players spend a lot of time and money finding the ones that work for them.

If a concert violinist could pick up any violin and sound the same, they wouldn't spend million$ on a Stradivarius.
 
Put another way, I suspect it's not the lack of perfection that causes synthesised instruments to suck. It's the lack of imperfections.

Any real instrument is full of overtones and harmonics that make up the total sound. I imagine that these could be recreated in software (much as you can get digital emulations of analogue gear), but it wouldn't be easy or cheap.
 
With that cleared up (I hope), I'm wondering why -- with the prevalence of physics engines in video games and the like -- it isn't possible to create a virtual instrument that could push calculations (interaction between strings, the body of the instrument, the room, the mic, etc.) to produce sound the same way it's produced in the real world.

Money.

You could probably develop a synth instrument that would do incredibly complicated calculations and produce sounds as complex as the real instrument's, but you wouldn't be able to sell enough copies to make it worth it.
 
Frequencies are always the same, all the way across the board. C4 is a steady 261.625565 hertz tone and the only thing making C4 on one instrument sound different than C4 on another (aside from amplitude) is the layering of different overtones at different intensities. Therefore -- in theory -- it should be possible to make a completely synth grand piano that sounds exactly like the real thing. Obviously that hasn't happened and they almost all sound like crap. So, nerds, can you explain why that is?

Because the world is exceedingly complex and computers are still primitive?

I dunno, because pianos just sound the same to me. But then, synthesizers are a thing in their own right, which is kind of the whole point of them, I suppose. If I wanted a piano, I'd use a piano, and if I wanted something weird, I'd use a synthesizer.
 
A thought, though hypothetical and academic (which is, I think, consistent with the nature of the OP's post):

The power of a digital processor is defined in terms of speed. That is to say: my computer can do anything a supercomputer can do; it just does it much, much more slowly.

Realistic physical modeling for sound synthesis is incredibly demanding of processing power, if done in real time. So ... suppose you don't do it in real time? Take your computer, input the variables on a note-by-note basis for the violin part of, say, a 3-minute piece of music, then hit "render." Wait for 5 days. You've now got a wav file that is as realistic as could be produced with 1,000 times the processing power of your computer.

There are some practical issues, of course.
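As a sketch of the offline idea: walk a note list, call whatever synthesis function you like for each note (a throwaway decaying sine stands in for the expensive model here, since the point is the batch rendering, not the model), and write the mix to a wav file. The score format and names are invented for illustration:

```python
import numpy as np
from scipy.io import wavfile

SR = 44100

def stand_in_synth(freq, dur, sr=SR):
    """Placeholder for an expensive physical model: a decaying sine."""
    t = np.arange(int(dur * sr)) / sr
    return np.exp(-3 * t) * np.sin(2 * np.pi * freq * t)

def render_offline(score, synth, sr=SR):
    """Mix every (freq, start, dur) note into one buffer.  There's no
    real-time constraint, so synth() can take as long as it likes."""
    end = max(start + dur for _, start, dur in score)
    mix = np.zeros(int(end * sr) + 1)
    for freq, start, dur in score:
        note = synth(freq, dur)
        i = int(start * sr)
        mix[i:i + len(note)] += note
    return mix / np.max(np.abs(mix))

# Invented three-note 'score': (frequency in Hz, start in s, duration in s).
score = [(261.63, 0.0, 1.0), (329.63, 0.5, 1.0), (392.00, 1.0, 1.5)]
audio = render_offline(score, stand_in_synth)
wavfile.write("render.wav", SR, (audio * 32767).astype(np.int16))
```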
 
C4 is a steady 261.625565 hertz tone for maybe a fraction of a second, and then things happen. That initial sound energy hits the frame of the piano, the other strings, the floor, the walls, etc., and by the time you perceive it, it has already undergone a lot of changes. Somehow all this affects its timbre.
That assumes you hear a real piano in a real space.
A synth's sound has to be delivered by way of a speaker, which is inefficient by nature. So even if the electronics can produce the 261.625565 hertz tone, the delivery mechanisms of the two are alien to each other.
Well, that's the best I've got. And I think about shit like this all the time :).
 