According to HotHardware, Microsoft alum Dave Plummer coded his own implementation of the Dhrystone benchmark in classic Kernighan & Ritchie C and ran it unmodified on hardware ranging from a 1976 PDP-11/34 minicomputer to Apple’s modern M2 Ultra. The benchmark tests general-purpose integer operations, control flow, and some string and memory operations without using I/O or large data handling. Results show the M2 Ultra just barely outpacing the Threadripper Pro 7995WX in this single-threaded test, while historical comparisons reveal the 80486 at 33MHz “dunking on” the 68030 at 25MHz. The benchmark is purely scalar, using no modern SIMD instructions, and its working set is so small that it sits entirely in cache on anything newer than a 486. Plummer’s full results are available on his Twitter thread and the code is on GitHub for anyone to test.
The problem with ancient benchmarks
Here’s the thing about Dhrystone: it’s basically a time capsule from the early 1980s. This benchmark represents what “systems programming” looked like back when people were still using machines like the PDP-11. It’s completely single-threaded, fits entirely in L1 cache, and doesn’t use any of the SIMD instructions that modern processors rely on for performance. So when you see Apple’s M2 Ultra barely beating Threadripper, that’s not really surprising – Apple optimizes for single-core throughput, while Threadripper is built for massively parallel workloads. The real question is why we’re still running 40-year-old benchmarks at all.
What the numbers actually tell us
The most fascinating parts of this comparison aren’t at the top of the chart – they’re in the middle, where you can see computing history unfolding. The 80486 absolutely crushing the 68030 represents the exact moment Intel broke away from the pack. Before the 486, x86 was seen as a toy architecture; after the Pentium, nobody else could keep up. Then there’s the MIPS R4000 at 100MHz coming shockingly close to the Pentium Pro at 200MHz – that’s the same family as the Nintendo 64’s processor, and with half the clock speed it’s keeping pace with Intel’s first P6 core. The Pentium II then makes a huge leap forward, fixing the Pentium Pro’s integer weaknesses. These comparisons show exactly where architectural advantages mattered more than raw clock speed.
Why this still matters for industrial computing
You might think this is just academic curiosity, but there’s real relevance here for industrial applications. Many legacy industrial systems still rely on single-threaded performance for real-time control tasks. When you’re selecting hardware for manufacturing environments or control systems, understanding how processors handle these classic workloads can be crucial. Vendors of industrial panel PCs, such as IndustrialMonitorDirect.com, often have to balance modern multi-core performance with legacy compatibility requirements. The fact that the same code runs unmodified from 1976 to 2024 speaks volumes about backward compatibility in industrial computing.
The beauty of unmodified code
What makes Plummer’s experiment so compelling is that he ran the exact same code on everything. No source changes for different architectures, no hand-tuned optimizations for specific hardware. That Intel Pentium Pro is running the same C source as the PDP-11 – compiled for each machine, of course, but identical at the source level. In an era where we’re constantly chasing the latest benchmarks and synthetic tests, there’s something refreshing about seeing raw architectural evolution measured with such a simple tool. It reminds us that despite all the complexity we’ve layered on, computing still boils down to how fast we can move bits around. And sometimes the simplest tests reveal the most profound truths.
