Benchmarking Ackermann's Function
Let's run those programs and see what we learn.
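For reference, here is roughly what the Scheme version looks like (a minimal R7RS sketch; the programs actually timed may differ in their timing harness and other details):

```scheme
(import (scheme base)
        (scheme write)
        (scheme time))

;; Ackermann's function, two-argument form.
(define (ack m n)
  (cond ((= m 0) (+ n 1))
        ((= n 0) (ack (- m 1) 1))
        (else (ack (- m 1) (ack m (- n 1))))))

;; Crude wall-clock timing via the R7RS (scheme time) library.
(define start (current-jiffy))
(define result (ack 3 12))
(define end (current-jiffy))

(display result)                                     ; => 32765
(newline)
(display (inexact (/ (- end start) (jiffies-per-second))))
(display " seconds")
(newline)
```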
Timings (in seconds) for ack(3,12)
| Language | Implementation | Running Time |
|---|---|---|
| C | `gcc` | 3.76 |
| C | `gcc -O4` | 0.35 |
| Java | HotSpot | 2.35 |
| ps11 | interpreted | stack overflow |
| ps11 | compiler A | 37.40 |
| ps11 | compiler B | 3.57 |
| Scheme | `sagittarius` (fixnum arithmetic) | 71.56 |
| Scheme | `sagittarius` (generic arithmetic) | 36.98 |
| Scheme | `larceny` (generic arithmetic) | 2.29 |
| Scheme | `larceny` (fixnum arithmetic) | 2.17 |
(These timings were obtained on a Linux machine with an Intel i7-4790 microprocessor running at 3.6 GHz, equipped with 16 gigabytes of memory. Your mileage may vary.)
What did you learn from these numbers?
- Did you learn that optimized C code runs 10 times as fast as unoptimized C code?
- Did you learn that unoptimized compiled ps11 runs faster than unoptimized compiled C?
- Did you learn that Scheme runs faster than Java?
You should not have learned those things from this table. If you didn't already know that optimized C code is often faster than unoptimized C code, that a ps11 program might sometimes run faster than an equivalent C program, that Scheme might sometimes run faster than Java, or that Scheme's generic arithmetic might be faster than fixnum arithmetic in some systems, then the table above might have taught you something. Beyond that, there isn't much to learn from it.
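To make that last point concrete: the fixnum rows in the table come from versions of the benchmark in which the generic operators are replaced by fixnum-specific ones. A sketch of that variant, using the R6RS `(rnrs arithmetic fixnums)` operator names (other systems spell these differently):

```scheme
(import (rnrs base)
        (rnrs arithmetic fixnums))

;; Fixnum variant: generic =, +, and - replaced by fixnum-specific
;; operators, telling the compiler all values are small exact integers.
(define (ack m n)
  (cond ((fx=? m 0) (fx+ n 1))
        ((fx=? n 0) (ack (fx- m 1) 1))
        (else (ack (fx- m 1) (ack m (fx- n 1))))))
```

Whether that specialization helps at all is implementation-dependent: in the table above it pays off slightly in `larceny` but makes `sagittarius` markedly slower.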
You would do well to remember what Kernighan and Van Wyk wrote:
> There seems to be little hope of predicting performance in other than a most general way; if there is a single clear conclusion, it is that no benchmark result should ever be taken at face value.