I've created a function which hashes a large file (~2GB) and writes the digest to another file. The buffer size appears to correlate directly with how fast the hash is computed: when the buffer is set to [1024] it is noticeably slower than when it is set to [1048576], for example.
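For context, the function boils down to a read-update loop like the sketch below (Python with hashlib is used purely to illustrate the pattern; my real code may use a different language and hash algorithm, and `hash_file` / `BUFFER_SIZE` are just placeholder names):

    import hashlib

    BUFFER_SIZE = 1048576  # 1 MB -- the value I'm experimenting with

    def hash_file(path, out_path, buffer_size=BUFFER_SIZE):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Read the file in buffer_size chunks so the whole ~2GB file
            # never has to sit in memory at once.
            while True:
                chunk = f.read(buffer_size)
                if not chunk:
                    break
                h.update(chunk)
        # Write the hex digest to the output file.
        with open(out_path, "w") as out:
            out.write(h.hexdigest())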
Increasing the buffer beyond [1048576] slows it down again, however, and I'm wondering why that is.
It appears that 1MB is the ideal size, I'm just not sure why! Thank you in advance.