Like many kids who grew up to be computer jockeys, I had a Commodore 64, which my father actually won on a business trip. It was awesome. Sure, I played a lot of games, but I also learned what a nested loop was in 5th grade. So, recall the C64.
Now, recall that computer speed is measured in "FLOPS," or floating point operations per second. A floating point number is just a number with a decimal part, such as 1.234. An example of a floating point operation would be 4.111 + 2.333 = 6.444. (If this were a long, dull article in a journal, I would put here an explanation of why this isn't a perfect measure of performance and why FLOPS are benchmarked with linear algebra packages, but it ain't, so I won't.) FLOPS are used as a measure of computer speed: the more FLOPS, the faster the machine. The top supercomputer on earth (this month) has just cracked the petaFLOPS barrier, at 1.105 petaFLOPS.
There were 30,000,000 C64 computers built, and each ran at roughly 320 FLOPS. If every C64 ever built were suddenly unleashed on a problem, it would take that great machine of brown plastic and frustration about 32 hours to equal the computational work done every second by the current state-of-the-art supercomputer.
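If you want to check that arithmetic yourself, here it is as a few lines of Python. This is just a sketch of the back-of-the-envelope math using the rough figures quoted above (320 FLOPS per C64, 30,000,000 units, 1.105 petaFLOPS for the supercomputer):

    # Sanity check on the 32-hour figure, using the numbers from the post.
    c64_flops = 320               # rough floating point speed of a single C64
    c64_units = 30_000_000        # approximate number of C64s ever built
    super_flops = 1.105e15        # top supercomputer: 1.105 petaFLOPS

    all_c64s = c64_flops * c64_units      # combined fleet: 9.6e9 FLOPS (~9.6 gigaFLOPS)
    seconds = super_flops / all_c64s      # C64-fleet-seconds per supercomputer-second
    print(f"{seconds / 3600:.1f} hours")  # prints "32.0 hours"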
Pretty cool, eh?
1. The basic idea, measuring supercomputer time in units of every C64 ever built, comes from a /. (Slashdot) post this morning.
1 comment:
Yeah, right. 100% scalability w/ 30,000,000 nodes? That would be awesome.