The same way that chemistry students are taught to calculate their theoretical yield, CS students should be taught to calculate the theoretical number of cycles a given operation needs, and then treat getting close to that number as seriously as chemists do.
Sometimes your performance will be low because of all the incidental crap you have to take care of. But for a lot of things we do on computers there just isn't any excuse.


Mark ☑️
in reply to Zeke
They used to do that - both the time needed per operation and algorithm efficiency (is it order N^2, etc.).
I have a PDP-10 assembly language book that lists the execution time of each instruction; some took more or less time depending on their arguments.
Knuth certainly looked at algorithm efficiency but I have other books that did the same.
I fixed one symbol table function because it did a linear search of an unsorted list. SMH.
Zeke
in reply to Mark ☑️
@mhjohnson I'm not talking about Big O. Big O is hand-wavium about algorithm complexity (and is probably taught incorrectly, but that's a different story). I mean hard calculations: "for each of these 1,000,000 entities we need 6 multiplies, 9 adds, 1 exponentiation, 2 pointer indirections, so the solution should run in X milliseconds at 2.4 GHz."
That way when some yahoo decides to OOP all over the thing you could call out how inefficient the implementation is.
Mark ☑️
in reply to Zeke
I have books that do that. Like comparing matrix multiplication for navigation with quaternions, which are more efficient.
Another thing I learned way back when is how to unroll loops to reduce overhead. Yeah - compilers can sort of do that but often don't. With your 1M iteration example, it can be faster to unroll the loop to 10K by doing 100 at a time. Fits easily in the cache, etc.