Resolution and update intervals of different time fetching functions (last update: 2012-12-26, created: 2012-12-26)
There are several methods for acquiring time (or, actually, time deltas, which is what I'm going to focus on) in your code. Some methods are language specific (e.g. the time() or clock() functions in C/C++), others are OS specific (like GetTickCount() on Windows or clock_gettime() on GNU/Linux); some are provided by various libraries (like SDL_GetTicks() from libsdl), and lastly some are just direct CPU (or other hardware) queries (like the rdtsc instruction on x86).
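
As a quick illustration of that last category, here is a minimal sketch of reading the time-stamp counter directly. It assumes an x86/x86-64 CPU and GCC-style inline assembly, and it is not one of the test programs used below:

#include <stdint.h>
#include <stdio.h>

/* Read the CPU's time-stamp counter; rdtsc puts the 64-bit value
   in the edx:eax register pair. */
static uint64_t rdtsc(void)
{
  uint32_t lo, hi;
  __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
  return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
  uint64_t a = rdtsc();
  uint64_t b = rdtsc();

  /* The difference is expressed in CPU (or reference) cycles,
     not in any wall-clock unit. */
  printf("delta: %llu cycles\n", (unsigned long long)(b - a));
  return 0;
}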

Each method can be described using these two important characteristics:
1. The units of the returned value, or the resolution of the acquired time, e.g. milliseconds or CPU cycles.
2. The precision, or update interval, i.e. the minimal difference between two distinct returned values. This one isn't obvious to newer programmers, since one could expect a function returning milliseconds to have its internal time counter updated at least every millisecond. That isn't necessarily the case: a function returning milliseconds can actually have its internal counter updated e.g. every 50 ms (see the sketch after this list).
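
To make the distinction concrete: on GNU/Linux one can ask the OS what resolution it advertises for a given clock, but that value says nothing about how often the counter is actually updated - the latter has to be measured, as in the tests below. A minimal sketch, assuming a POSIX system with clock_getres() (older glibc may require linking with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
  struct timespec res;

  /* Ask the OS for the advertised resolution of CLOCK_MONOTONIC
     (ignoring res.tv_sec for simplicity; it's 0 on typical systems). */
  if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
    printf("advertised resolution: %ld ns\n", res.tv_nsec);

  /* The advertised value (often 1 ns) may be much finer than the
     interval at which the clock is really updated. */
  return 0;
}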

Below I present a table with said characteristics of several methods - their resolution (units) and update interval (precision), the latter given both in the function's own units and in milliseconds for easier comparison.

Please keep in mind that, in the end, every time fetching method is bounded by the capabilities of whatever hardware-supported mechanism it uses. Off the top of my head, the hardware features that allow one to get some kind of time-related value include the RTC, the PIT, the ACPI PM timer, the HPET, the local APIC timer, and the CPU's time-stamp counter (read via rdtsc).
Also, please note that this table is by no means exhaustive, and the values were mostly determined experimentally. So please treat it as a simplified map of the functions rather than a 100% correct reference.
[Table: the tested functions with their resolution (units) and update interval (precision), given both in each function's own units and in milliseconds]
All tests were done with similar-looking programs. E.g. for clock() I used this code:

#include <stdio.h>
#include <time.h>

#define TEST_NO 50

int
main(void)
{
  int cnt = TEST_NO;

  struct item {
    clock_t curr, test;
    size_t iters;
  } items[TEST_NO];

  /* Spin until clock() returns a different value; the difference
     between the two readings is the update interval. */
  int i = 0;
  while (cnt-- > 0) {
    size_t iters = 0;
    clock_t curr, test;
    curr = clock();
    do {
      test = clock();
      iters++;
    } while (curr == test);

    items[i].curr = curr;
    items[i].test = test;
    items[i].iters = iters;
    i++;
  }

  /* Print the results only after all the measurements are done, so
     the printing itself doesn't disturb the timing loops. */
  for (i = 0; i < TEST_NO; i++) {
    size_t iters = items[i].iters;
    clock_t curr = items[i].curr, test = items[i].test;

    /* clock_t has no portable printf format, so cast explicitly. */
    printf("%lu -> %lu (low diff: %lu) -- after %zu iters\n",
      (unsigned long)curr, (unsigned long)test,
      (unsigned long)(test - curr),
      iters);
  }

  return 0;
}
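
The same loop adapts easily to other APIs. For instance, here is a sketch of the idea for gettimeofday() on POSIX systems (again, an illustration of the measurement method, not one of the exact programs used for the table):

#include <stdio.h>
#include <sys/time.h>

/* Convert a timeval to microseconds for easy comparison. */
static long long to_us(struct timeval tv)
{
  return (long long)tv.tv_sec * 1000000LL + tv.tv_usec;
}

int main(void)
{
  struct timeval curr, test;
  size_t iters = 0;

  /* Spin until gettimeofday() reports a different timestamp. */
  gettimeofday(&curr, NULL);
  do {
    gettimeofday(&test, NULL);
    iters++;
  } while (to_us(curr) == to_us(test));

  printf("update interval: %lld us (after %zu iters)\n",
    to_us(test) - to_us(curr), iters);
  return 0;
}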

The tests were done on a couple of different computers, ranging from an old Intel Pentium 4 to (at the time of writing these words) quite new Intel Xeon and Intel Core i7 machines. A couple of tests were done on a VM, but only when I was sure that wouldn't change the results (the iteration count printed at the end of each line is a good indicator of errors, btw).

If you have any suggestions on what functions I should add to this list, or, better yet, if you've already done the tests and would like to share your results, let me know.