64-bit mix engine?

  • Thread starter: Lo-Fi Mike
That "test" is absolute total bullcrap.

The same people are "selling" a $9,500 system that doesn't even exist yet (X38 motherboards are not out yet), and when it does exist it will cost about $2,500 to build.

*edit* I see today that someone called his bluff and he removed the link to buy as well as the information on the vaporware system called "elementx" lol

good catch!

But is there a benchmark anywhere that doesn't show Vista getting utterly pwnt on audio?
 
No, no one has done that test because it's contrived and can't be duplicated.

I HAVE done REAL tests with Vista and XP, and Vista falls on its face performance-wise, especially at low latency. So has this guy, and HIS tests can be duplicated and verified by anyone, unlike Rain's marketing BS.
http://www.aavimt.com.au/dawbench/blofelds-xp-v-vista.htm
 
Well, I sure won't be leaping into Vista any time soon.
 
64-bit floating point is theoretically more resistant to clipping than 32-bit,

Well, sort of, if you're mixing more than 3.402823466e38 tracks together. In the real world, 64-bit floats will be more resilient to rounding errors.

Floats are all about precision, not limits. Large numbers are described by larger units, smaller numbers by smaller units, as opposed to fixed-point numbers, which have a unit of 1. So a float may say "I've got 16324 152s here" or "I've got 16324 0.52324152s here", as opposed to an integer, which says "I've got 16324 1s".
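Here's a tiny C sketch of that idea, purely for illustration (the example magnitudes are made up, not from any DAW): nextafterf() shows how big one float "unit" (ulp) is at a given magnitude, and that unit grows with the value, while an integer's unit is always exactly 1.

```c
/* Minimal sketch (not from the thread): how a float's "unit" scales with
 * magnitude, versus a fixed-point/integer value whose unit is always 1.
 * Standard C only; compile with e.g. `cc demo.c -lm`.                  */
#include <stdio.h>
#include <math.h>

int main(void)
{
    float big   = 1.0e8f;   /* a large running sum          */
    float small = 1.0e-3f;  /* a quiet reverb-tail sample   */

    /* nextafterf() gives the adjacent representable float, so the gap is
     * the size of one "unit" (ulp) at that magnitude.                   */
    printf("unit near %.3g = %.3g\n", big,   nextafterf(big,   2.0f * big)   - big);
    printf("unit near %.3g = %.3g\n", small, nextafterf(small, 2.0f * small) - small);

    /* A fixed-point/integer value always counts in units of exactly 1,
     * no matter how large or small it is.                               */
    printf("unit of any int = 1\n");
    return 0;
}
```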

A 32-bit float holds about 24 bits of precision (a 23-bit mantissa plus an implied leading bit). So when you start summing audio scaled by volume sliders and reverb trails, you get a lot of fractional results from your original 24-bit input streams. It all gets lopped off, rounded, or dithered down to 16 or 24 bits by the time you hear it, so what's the difference?

The difference is that a 32-bit bus might sum everything together as 15234.442352, whereas the double-precision bus would arrive at a more accurate 15234.52352983948394823, with fewer rounding errors during summation. The resulting 16-bit sample would then round to 15235, instead of the 15234 the less accurate 32-bit floating bus would arrive at.

I'm sure you could mathematically calculate a typical worst-case scenario; I have no idea what it is. I assume it only amounts to a small amount of white noise, and whether that means anything you can hear, in a signal that half the time sounds best once white noise has been added to it on purpose, is debatable. But 64-bit is more accurate.
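For what it's worth, here's a minimal C sketch of that summation point (the signal value and the iteration count are made up; it just accumulates the same samples on a 32-bit float bus and a 64-bit double bus and prints both):

```c
/* Hedged sketch (not from the thread): run the same "mix" through a
 * 32-bit float bus and a 64-bit double bus and compare the totals. The
 * single-precision bus picks up rounding error in its low digits; the
 * double-precision bus keeps far more of them.                          */
#include <stdio.h>

int main(void)
{
    float  bus32 = 0.0f;
    double bus64 = 0.0;

    /* 0.1 is not exactly representable in binary, so every addition on
     * the 32-bit bus rounds; those small errors pile up over many
     * samples, just like faded/reverb-scaled contributions would.       */
    for (int i = 0; i < 100000; i++) {
        float sample = 0.1f;  /* stand-in for a scaled sample */
        bus32 += sample;
        bus64 += (double)sample;
    }

    printf("32-bit float bus : %.6f\n", (double)bus32);
    printf("64-bit double bus: %.6f\n", bus64);
    return 0;
}
```

The low digits are where the two buses drift apart, and that drift is what can occasionally nudge the final truncated 16- or 24-bit sample one step up or down.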
 
64-bit CPUs should, if the app supports the proper instructions, calculate 64-bit floats faster than 32-bit CPUs. It's a register thing, I think. A 32-bit CPU has to juggle crap around internally to process a 64-bit number; a 64-bit CPU should be able to just stuff it somewhere and process it in one bite. I expect that address space is dictated by register size, but 64-bit registers should also mean native processing of 64-bit pieces of data.
 

No, you're thinking of integer math. A 64-bit CPU can do 64-bit integer calculations faster because the general-purpose registers (used for both addresses and integer math) are twice as wide, so a 64-bit add fits in a single instruction. On a 32-bit CPU it takes multiple instructions: for an add, at least two, an add of the low halves followed by an add-with-carry of the high halves (ADD then ADC on Intel, addc then adde on PowerPC), and that assumes you have enough registers to keep both halves of each operand on hand at once.
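Here's a rough C sketch of that, purely for illustration (the helper name add64_on_32bit is made up; exactly how a real compiler splits the work depends on the target):

```c
/* Hedged illustration (not the poster's code): a 64-bit add written the
 * way a 32-bit machine has to do it, in two pieces with a carry, versus
 * the single register-wide operation a 64-bit CPU allows.               */
#include <stdint.h>
#include <stdio.h>

/* What a 32-bit CPU effectively does: add the low halves, detect the
 * carry, then fold it into the sum of the high halves. On x86 that is an
 * ADD followed by an ADC; PowerPC uses addc/adde the same way.          */
static uint64_t add64_on_32bit(uint32_t a_lo, uint32_t a_hi,
                               uint32_t b_lo, uint32_t b_hi)
{
    uint32_t lo    = a_lo + b_lo;
    uint32_t carry = (lo < a_lo);      /* unsigned wraparound => carry out */
    uint32_t hi    = a_hi + b_hi + carry;
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t a = 0x00000001FFFFFFFFULL;
    uint64_t b = 0x0000000000000001ULL;

    /* On a 64-bit CPU this is one register-wide ADD instruction. */
    uint64_t native = a + b;

    uint64_t pieced = add64_on_32bit((uint32_t)a, (uint32_t)(a >> 32),
                                     (uint32_t)b, (uint32_t)(b >> 32));

    printf("native: %016llx\n", (unsigned long long)native);
    printf("pieced: %016llx\n", (unsigned long long)pieced);
    return 0;
}
```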

For floating-point math, the only advantage is that there are twice as many floating-point registers on x86-64 as on IA-32 (16 XMM registers instead of 8). The registers themselves are no wider in the 64-bit variants than in the 32-bit ones (the x87 registers are 80 bits on Intel, the FP registers are 64 bits on PowerPC), and on most non-Intel architectures the number of registers doesn't change either.
 
Ya, that sounds right, so the crunching is more or less the same on floats.

I'd guess there's some wider plumbing somewhere that helps a bit.
 

The larger number of registers can help, depending on what you're doing with floating point. It just doesn't always help.

Specifically, if the extra floating-point registers mean that a series of operations can happen entirely in registers, instead of spilling out to RAM (or even L2 cache) because the compiler ran out of free registers, you'll see a speedup from the extra registers.
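Here's a rough C sketch of the kind of code where that can show up (illustrative only; whether anything actually spills depends entirely on the compiler and optimization flags): a dot product unrolled with eight independent accumulators. With the 16 XMM registers of x86-64 the accumulators can usually all stay in registers; with the 8 available to 32-bit SSE code, the compiler is more likely to push some of them to the stack each pass.

```c
/* Register-pressure sketch (made-up example, not from the thread): eight
 * independent accumulators plus the loaded operands want a lot of
 * floating-point registers to live in at once.                          */
#include <stdio.h>
#include <stddef.h>

static double dot8(const double *a, const double *b, size_t n)
{
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    double s4 = 0, s5 = 0, s6 = 0, s7 = 0;

    for (size_t i = 0; i + 8 <= n; i += 8) {
        s0 += a[i + 0] * b[i + 0];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
        s4 += a[i + 4] * b[i + 4];
        s5 += a[i + 5] * b[i + 5];
        s6 += a[i + 6] * b[i + 6];
        s7 += a[i + 7] * b[i + 7];
    }
    /* Tail elements (n not a multiple of 8) omitted for brevity. */
    return ((s0 + s1) + (s2 + s3)) + ((s4 + s5) + (s6 + s7));
}

int main(void)
{
    double a[16], b[16];
    for (int i = 0; i < 16; i++) { a[i] = i * 0.5; b[i] = 1.0; }
    printf("dot = %f\n", dot8(a, b, 16));
    return 0;
}
```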
 