Steenamaroo
...
First, let me thank everyone for having such a civil and informative discussion. From what I've seen, this is not always the case on this HR forum.
So, I work in the area of high-end massively parallel compute systems (i.e., supercomputing). When people benchmark systems in this area, they rely on various benchmarks (google Jack Dongarra for more information) that focus on how many floating point operations one can perform per second. That's fine for bragging rights but not always a good predictor of performance for systems acquired to perform particular tasks. That is, DoD systems focus on different algorithms than NSF systems, which differ from NIH systems, which differ from transaction-based systems, etc.
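To make that concrete, here's a rough sketch (an illustrative micro-benchmark, not a real HPC benchmark; the sizes and the FLOP counting are my own assumptions). A dense matrix multiply resembles the compute-bound kernels behind FLOPS ratings like LINPACK, while a STREAM-triad-style vector update is memory-bound, so the machine's headline FLOPS number tells you very little about it:

```python
# Illustrative only: sizes and the "achieved GFLOP/s" arithmetic are
# assumptions for demonstration, not a calibrated benchmark.
import time
import numpy as np

n = 2000
A = np.random.rand(n, n)
B = np.random.rand(n, n)

# Compute-bound kernel: dense matmul, roughly 2*n^3 floating point ops.
t0 = time.perf_counter()
C = A @ B
t1 = time.perf_counter()
print(f"matmul: {2 * n**3 / (t1 - t0) / 1e9:6.1f} GFLOP/s")

# Memory-bound kernel: STREAM-triad-style update, only ~2 flops per
# element but three memory streams; bandwidth, not peak FLOPS, sets the time.
m = 20_000_000
x = np.random.rand(m)
y = np.random.rand(m)
t0 = time.perf_counter()
z = x + 2.0 * y
t1 = time.perf_counter()
print(f"triad:  {2 * m / (t1 - t0) / 1e9:6.1f} GFLOP/s")
```

On most machines the second number comes out far below the first, even though it's the same hardware, which is exactly why a FLOPS ranking doesn't predict performance on a memory-bound workload.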
In fact, another point of confusion is efficient utilization of resources vs. an efficient algorithm (to a point made earlier). When I teach a third-year graduate course in massively parallel algorithms, my first project often concludes with the majority of the class keeping all available processors very busy, compared to the small minority who recognize that by using far fewer processors the algorithm will run faster, which is the ultimate goal. That is, keeping the processors 95% utilized is not necessarily a predictor of the efficiency of the software, which is what matters.
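A toy model makes that point concrete (all numbers here are invented for illustration, and the runtime formula is a standard textbook simplification, not a measurement of any real machine): if per-processor overhead grows with processor count, total runtime W/p + c*p has a minimum at a moderate p, and adding processors past it keeps them busy without making the run faster.

```python
# Toy model of the utilization-vs-runtime trap. W (total useful work) and
# c (per-processor communication/overhead cost) are made-up numbers.
W = 1.0e9
c = 1.0e4

def runtime(p):
    # Useful work shrinks with p, but overhead grows with p.
    return W / p + c * p

for p in (64, 128, 256, 316, 512, 1024):
    t = runtime(p)
    busy = (W / p) / t  # fraction of wall time spent on useful work
    print(f"p={p:5d}  time={t:12.0f}  useful-work fraction={busy:5.1%}")

# The minimum of W/p + c*p sits at p = sqrt(W/c), about 316 here: beyond
# that, the extra processors stay "busy" (mostly with communication) while
# the program gets slower.
```

The utilization counter can't tell useful work from overhead, so a run at p=1024 can look fully loaded while finishing well after the p=316 run.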
Just food for thought.
If I understand correctly, it's a bit like using file transfer stats to compare hard drives when, in real-world use, there's a hell of a lot more to it than that.
Going back a little bit, the single/multicore thing is well illustrated here.
Ok, this is still a generic benchmark thing, but it shows that the 'fastest' computer may have considerably less grunt per core than another machine.