Uh... no. It's not 32-bit integer. It's 32-bit float. Converting from 24-bit integer to 32-bit float is definitely a non-zero amount of work. To avoid precision loss, you first have to convert the integer to a double (64-bit float), then multiply by a scaling factor, then reduce the double to a 32-bit float. It's not a huge amount of computation, but it's probably on the order of twenty or thirty clock cycles per sample. (I'm not going to sit down and write out the instruction stream to get an accurate estimate, but it is a fair number of instructions, most of which take multiple clock cycles even if the data is already in a cache line.)
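Just to make the steps concrete, here's a minimal sketch in C of the conversion path described above: unpack a 24-bit sample, widen to double, scale to the usual [-1.0, 1.0) range, then narrow to a 32-bit float. The function name, the little-endian byte layout, and the 1/2^23 scale factor are my own illustrative assumptions, not taken from any particular library.

```c
#include <stdint.h>

/* Convert one packed little-endian 24-bit PCM sample to a 32-bit float,
 * going through a double intermediate as described above.
 * (Hypothetical helper for illustration only.) */
static float sample24_to_float32(const uint8_t *p /* 3 bytes, LSB first */)
{
    /* Assemble the 24-bit value... */
    uint32_t u = (uint32_t)p[0] | ((uint32_t)p[1] << 8) | ((uint32_t)p[2] << 16);

    /* ...and sign-extend it: if bit 23 is set, subtract 2^24. */
    int32_t s = (int32_t)u - ((u & 0x00800000u) ? 0x01000000 : 0);

    /* Widen to double, scale by 1/2^23, then narrow to float. */
    double scaled = (double)s * (1.0 / 8388608.0);
    return (float)scaled;
}
```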
That said, you have to weigh the conversion against the I/O cost. 32-bit float takes 33% more I/O bandwidth to your hard drive, and that costs a bit in terms of CPU overhead. In the grand scheme of things, if I had to guess, I'd expect the I/O overhead from staying in 32-bit float to be greater than the overhead caused by the conversion. In any case, which one is faster would be machine-dependent. A machine whose I/O uses relatively little CPU (SATA) might do better with 32-bit float files (not sure); a machine doing I/O over a piggish bus (USB) would almost certainly be better off using 24-bit files and doing the conversion.
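For a rough sense of scale (stereo at 96 kHz, just as an illustrative example): packed 24-bit samples come to 96,000 × 2 × 3 = 576 KB/s, while 32-bit floats come to 96,000 × 2 × 4 = 768 KB/s, which is exactly that 33% difference. Neither number is going to stress a modern disk, so how much CPU the I/O path itself burns is really the deciding factor.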