After porting some legacy code from Win32 to Win64, and after discussing the best strategy to remove the "possible loss of data" warning (What's the best strategy to get rid of "warning C4267: possible loss of data"?), I'm about to replace many unsigned int variables with size_t in my code.
However, my code is performance-critical (I can't even run it in Debug... it's too slow).
I did a quick benchmarking:
#include "stdafx.h"
#include <chrono>
#include <cstdlib>   // std::rand
#include <iostream>
#include <string>

template<typename T> void testSpeed()
{
    auto start = std::chrono::steady_clock::now();
    T big = 0;
    for ( T i = 0; i != 100000000; ++i )
        big *= std::rand(); // rand() has side effects, so the loop is not optimized away
    std::cout << "Elapsed "
              << std::chrono::duration_cast<std::chrono::milliseconds>(
                     std::chrono::steady_clock::now() - start ).count()
              << "ms" << std::endl;
}

int main()
{
    testSpeed<size_t>();
    testSpeed<unsigned int>();

    std::string str;
    std::getline( std::cin, str ); // pause before the console closes
    return 0;
}
Compiled for x64, it outputs:
Elapsed 2185ms
Elapsed 2157ms
Compiled for x86, it outputs:
Elapsed 2756ms
Elapsed 2748ms
So apparently, using size_t instead of unsigned int has an insignificant performance impact here. But is that really always the case? (It's hard to benchmark performance reliably this way.)
Does (or may) changing unsigned int to size_t impact CPU performance, now that a 64-bit value is manipulated instead of a 32-bit one?