The large cache size makes read/write operations between the drive and the board appear faster when large file transfers are taking place. For you it won't make much difference, and here is why:
A cache works by reading a block of information from the drive surface into a very fast but small piece of RAM (known as cache RAM).
When your CPU wants information from the drive, it asks for a piece of data from a particular cluster. The cache reads that cluster plus all of the contiguous data surrounding it, so the chances are that when the next read request comes in, the data will already be in the cache, because data on any kind of storage tends to be stored contiguously (next to the last read address). This means the drive does not actually have to access the platter; it serves the data from cache memory instead, giving the effect of speeding up drive read/write operations.
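If it helps, here is a minimal sketch of that read-ahead idea. It is not real drive firmware, just an illustration: the cluster numbers, the `READ_AHEAD` value and the fake "platter" dictionary are all made up for the example.

```python
# Hypothetical sketch of read-ahead caching: a miss pulls in the requested
# cluster plus the contiguous clusters after it, so sequential reads hit.

READ_AHEAD = 8  # clusters fetched per platter access (assumed value)

class ReadAheadCache:
    def __init__(self):
        self.cache = {}   # cluster number -> data held in cache RAM
        self.hits = 0
        self.misses = 0

    def read(self, cluster, platter):
        if cluster in self.cache:        # already in cache RAM: fast path
            self.hits += 1
            return self.cache[cluster]
        self.misses += 1                 # not cached: slow platter access
        # read the requested cluster plus the contiguous run after it
        for c in range(cluster, cluster + READ_AHEAD):
            if c in platter:
                self.cache[c] = platter[c]
        return self.cache[cluster]

# a fake platter holding one file stored contiguously in clusters 100..131
platter = {c: f"data{c}" for c in range(100, 132)}

cache = ReadAheadCache()
for c in range(100, 132):                # read the file front to back
    cache.read(c, platter)

print(cache.hits, cache.misses)          # 28 hits, only 4 platter accesses
```

Reading the file sequentially only touches the platter once per read-ahead block; everything else comes straight out of cache RAM.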
The platter assembly is mechanical, hence it is very slow to respond compared to the cache, which is a solid-state chip and has no moving parts.
The cache also holds information that has been processed and writes it back to the platter in its own time. This means the CPU is freed up to do other tasks and doesn't have to wait for the data to be written to the platter (which is very slow, as it is a mechanism rather than solid state).
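The same idea in sketch form (again hypothetical, just to show the principle): writes land in cache RAM instantly and get flushed to the platter later, so the caller never waits on the mechanism. The `time.sleep` call simply stands in for the slow mechanical write.

```python
# Hypothetical write-back cache: write() is instant, flush() does the
# slow platter work later, "in its own time".

import time

class WriteBackCache:
    def __init__(self):
        self.dirty = {}                 # cluster -> data waiting to be written

    def write(self, cluster, data):
        self.dirty[cluster] = data      # instant: just a RAM update
        # caller returns here; the CPU is free to do other work

    def flush(self, platter):
        for cluster, data in self.dirty.items():
            time.sleep(0.01)            # stand-in for the slow mechanical write
            platter[cluster] = data
        self.dirty.clear()

platter = {}
cache = WriteBackCache()
for c in range(10):
    cache.write(c, f"data{c}")          # all ten writes return immediately
cache.flush(platter)                    # the mechanical work happens later
```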
Small file transfers do not really benefit from this, as the files tend to be scattered all over the drive platter, so the chance of getting a "cache hit" when a piece of data is requested is much smaller: the small files are not stored contiguously on the hard drive surface.
This is why drive performance suffers so badly when you have heavy file fragmentation. When you defragment a hard drive you rearrange the file system so that everything once again becomes contiguous and the cache can work at maximum efficiency!
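A rough comparison makes the point. This little simulation (same assumed read-ahead idea as above, with made-up cluster numbers) reads 256 clusters once when they are contiguous and once when they are scattered, and prints the hit rate for each:

```python
# Hypothetical comparison: cache hit rate for a contiguous file versus the
# same number of clusters scattered across the platter (fragmentation).

import random

READ_AHEAD = 8

def hit_rate(clusters):
    cached, hits = set(), 0
    for c in clusters:
        if c in cached:
            hits += 1
        else:
            # platter access: pull in the contiguous run after this cluster
            cached.update(range(c, c + READ_AHEAD))
    return hits / len(clusters)

contiguous = list(range(1000, 1256))                   # defragmented file
fragmented = random.sample(range(0, 1_000_000), 256)   # scattered clusters

print(f"contiguous: {hit_rate(contiguous):.0%}")       # roughly 88% hits
print(f"fragmented: {hit_rate(fragmented):.0%}")       # roughly 0% hits
```

The contiguous layout turns most reads into cache hits, while the fragmented layout forces a platter access for almost every request, which is exactly why defragmenting helps.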
