I have a chunk of Lua code that I'd like to be able to selectively ignore. I don't have the option of not reading it in, and sometimes I'd like it to be processed, sometimes not, so I can't just comment it out (that is, there's a whole bunch of blocks of code and I either read none of them or all of them).
I came up with two ways to implement this (there may well be more; I'm very much a beginner): either enclose the code in a function and then call or not call the function (and once I'm sure I'm past the point where I would call the function, I can set it to nil to free up the memory), or enclose the code in an if ... end block. The former has a slight advantage in that there are several of these blocks, and using a function makes it easier for one block to load another even if the main program didn't request it; but the latter seems the more efficient. However, not knowing much, I don't know whether the efficiency saving is worth it.
So how much more efficient is:
if false then
-- a few hundred lines
end
than
throwaway = function ()
-- a few hundred lines
end
throwaway = nil -- to ensure that both methods leave me in the same state after garbage collection
?
If it depends a lot on the lua implementation, how big would the "few hundred lines" need to be to reliably spot the difference, and what sort of stuff should it include to best test (the main use of the blocks is to define a load of possibly useful functions)?
Lua's not smart enough to dump the code for the function, so you're not going to save any memory.
In terms of speed, you're talking about a difference of nanoseconds which happens once per program execution. It's harming your efficiency to worry about this; it has virtually no relevance to actual performance. Write the code that you feel expresses your intent most clearly, without trying to be clever. If you run into performance issues, they will be a million miles away from this decision.
If you want to save memory, which is understandable on a mobile platform, you could put your conditional code in its own module and never load it at all if not needed (if your framework supports it; e.g. MOAI does, Corona doesn't).
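For example, here is a minimal sketch of that approach, assuming a hypothetical file named optional_stuff.lua on your package path (the exact loading mechanism depends on your framework):
-- optional_stuff.lua: define the possibly-useful functions and hand them back as a table
local M = {}
function M.greet(name)
    return "hello, " .. name
end
return M

-- main.lua: the chunk is only read, compiled and kept in memory if you actually require it
local optional
if need_optional_stuff then -- hypothetical flag set elsewhere in your program
    optional = require("optional_stuff")
end
if optional then
    print(optional.greet("world"))
end
Nothing in optional_stuff.lua is compiled until the require call runs, so skipping it costs you neither time nor memory.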
If there is really a lot of unused code, you can define it as a collection of strings and loadstring() them only when needed. Storing functions as strings will reduce the initial compile time; however, for most functions the string representation probably takes up more memory than its compiled form, and what you save on compilation probably isn't significant below a few thousand lines... Just saying.
If you put this code in a table, you could compile it transparently through a metatable for minimal performance impact on repeated calls.
Example code
local code_uncompiled = {
    f = [=[
        local x, y = ...;
        return x+y;
    ]=]
}
code = setmetatable({}, {
    __index = function(self, k)
        self[k] = assert(loadstring(code_uncompiled[k]));
        return self[k];
    end
});

local ff = code.f; -- code of f gets compiled here
ff = code.f; -- no compilation here
for i=1, 1000 do
    print( ff(2*i, -i) ); -- no compilation here either
    print( code.f(2*i, -i) ); -- no compilation either, but a table access (slower)
end
The beauty of it is that this compiles as needed and you don't really have to waste another thought on it; it's just like storing a function in a table, and it allows for a lot of flexibility.
Another advantage of this solution is that when the amount of dynamically loaded code gets out of hand, you could transparently change it to load code from external files on demand through the __index function of the metatable. Also, you can mix compiled and uncompiled code by populating the "code" table with "real" functions.
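As a sketch of that last idea (assuming, hypothetically, that each chunk lives in a file named code/<name>.lua), the __index function could fall back to loadfile when no in-memory string exists:
code = setmetatable({}, {
    __index = function(self, k)
        -- compile from the in-memory string if we have one, otherwise from a file on disk
        local src = code_uncompiled[k];
        local fn = src and assert(loadstring(src))
                        or assert(loadfile("code/" .. k .. ".lua")); -- hypothetical path scheme
        self[k] = fn;
        return fn;
    end
});
Callers still just write code.f(...); where the chunk actually comes from stays hidden behind the metatable.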
Try the one that makes the code more legible to you first. If it runs fast enough on your target machine, use that.
If it doesn't run fast enough, try the other one.
Lua can ignore multiple lines with a block comment:
function dostuff()
    blabla()
    faaaaa()
    --[[
    ignore this
    and this
    maybe this
    this as well
    ]]--
end
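A related idiom (not part of the answer above, but commonly used in Lua) is to close the block with --]] instead of ]]--; then adding a single extra hyphen to the opening line switches the whole block back on:
--[[
print("this line is ignored")
--]]

---[[ one extra hyphen: the opener is now just a line comment, so the block runs
print("this line is executed")
--]]
This gives you a quick way to toggle individual blocks while editing, though for switching behaviour at run time you still want one of the approaches above.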
We all know that premature optimization is the root of all evil because it leads to unreadable/unmaintainable code. Even worse is pessimization, when someone implements an "optimization" because they think it will be faster, but it ends up being slower, as well as being buggy, unmaintainable, etc. What is the most ridiculous example of this that you've seen?
I think the phrase "premature optimization is the root of all evil" is way, way overused. For many projects, it has become an excuse not to take performance into account until late in a project.
This phrase is often a crutch for people to avoid work. I see this phrase used when people should really say "Gee, we really didn't think of that up front and don't have time to deal with it now".
I've seen many more "ridiculous" examples of dumb performance problems than examples of problems introduced due to "pessimization":
Reading the same registry key thousands (or tens of thousands) of times during program launch.
Loading the same DLL hundreds or thousands of times
Wasting megabytes of memory by keeping full paths to files needlessly
Not organizing data structures so they take up way more memory than they need
Sizing all strings that store file names or paths to MAX_PATH
Gratuitous polling for things that have events, callbacks or other notification mechanisms
What I think is a better statement is this: "Optimization without measuring and understanding isn't optimization at all - it's just random change."
Good performance work is time consuming - often more so than the development of the feature or component itself.
Databases are pessimization playland.
Favorites include:
Split a table into multiples (by date range, alphabetic range, etc.) because it's "too big".
Create an archive table for retired records, but continue to UNION it with the production table.
Duplicate entire databases by (division/customer/product/etc.)
Resist adding columns to an index because it makes it too big.
Create lots of summary tables because recalculating from raw data is too slow.
Create columns with subfields to save space.
Denormalize into fields-as-an-array.
That's off the top of my head.
I think there is no absolute rule: some things are best optimized upfront, and some are not.
For example, I worked at a company where we received data packets from satellites. Each packet cost a lot of money, so all the data was highly optimized (i.e. packed). For example, latitude/longitude was not sent as absolute values (floats), but as offsets relative to the "north-west" corner of a "current" zone. We had to unpack all the data before it could be used. But I think this is not pessimization; it is intelligent optimization to reduce communication costs.
On the other hand, our software architects decided that the unpacked data should be formatted into a very readable XML document, and stored in our database as such (as opposed to having each field stored in a corresponding column). Their idea was that "XML is the future", "disk space is cheap", and "processor is cheap", so there was no need to optimize anything. The result was that our 16-bytes packets were turned into 2kB documents stored in one column, and for even simple queries we had to load megabytes of XML documents in memory! We received over 50 packets per second, so you can imagine how horrible the performance became (BTW, the company went bankrupt).
So again, there is no absolute rule. Yes, sometimes optimization too early is a mistake. But sometimes the "cpu/disk space/memory is cheap" motto is the real root of all evil.
On an old project we inherited some (otherwise excellent) embedded systems programmers who had massive Z-8000 experience.
Our new environment was 32-bit Sparc Solaris.
One of the guys went and changed all ints to shorts to speed up our code, since grabbing 16 bits from RAM was quicker than grabbing 32 bits.
I had to write a demo program to show that grabbing 32-bit values on a 32-bit system was faster than grabbing 16-bit values, and explain that to grab a 16-bit value the CPU had to make a 32-bit wide memory access and then mask out or shift the bits not needed for the 16-bit value.
Oh good Lord, I think I have seen them all. More often than not it is an effort to fix performance problems by someone that is too darn lazy to troubleshoot their way down to the CAUSE of those performance problems or even researching whether there actually IS a performance problem. In many of these cases I wonder if it isn't just a case of that person wanting to try a particular technology and desperately looking for a nail that fits their shiny new hammer.
Here's a recent example:
Data architect comes to me with an elaborate proposal to vertically partition a key table in a fairly large and complex application. He wants to know what type of development effort would be necessary to adjust for the change. The conversation went like this:
Me: Why are you considering this? What is the problem you are trying to solve?
Him: Table X is too wide, we are partitioning it for performance reasons.
Me: What makes you think it is too wide?
Him: The consultant said that is way too many columns to have in one table.
Me: And this is affecting performance?
Him: Yes, users have reported intermittent slowdowns in the XYZ module of the application.
Me: How do you know the width of the table is the source of the problem?
Him: That is the key table used by the XYZ module, and it is like 200 columns. It must be the problem.
Me (Explaining): But module XYZ in particular uses most of the columns in that table, and the columns it uses are unpredictable because the user configures the app to show the data they want to display from that table. It is likely that 95% of the time we'd wind up joining all the tables back together anyway which would hurt performance.
Him: The consultant said it is too wide and we need to change it.
Me: Who is this consultant? I didn't know we hired a consultant, nor did they talk to the development team at all.
Him: Well, we haven't hired them yet. This is part of a proposal they offered, but they insisted we needed to re-architect this database.
Me: Uh huh. So the consultant who sells database re-design services thinks we need a database re-design....
The conversation went on and on like this. Afterward, I took another look at the table in question and determined that it probably could be narrowed with some simple normalization, with no need for exotic partitioning strategies. This, of course, turned out to be a moot point once I investigated the performance problems (previously unreported) and tracked them down to two factors:
Missing indexes on a few key columns.
A few rogue data analysts who were periodically locking key tables (including the "too-wide" one) by querying the production database directly with MS Access.
Of course the architect is still pushing for a vertical partitioning of the table hanging on to the "too wide" meta-problem. He even bolstered his case by getting a proposal from another database consultant who was able to determine we needed major design changes to the database without looking at the app or running any performance analysis.
I have seen people using alphadrive-7 to totally incubate CHX-LT. This is an uncommon practice. The more common practice is to initialize the ZT transformer so that bufferication is reduced (due to greater net overload resistance) and create java style bytegraphications.
Totally pessimistic!
Nothing Earth-shattering, I admit, but I've caught people using StringBuffer to concatenate Strings outside of a loop in Java. It was something simple like turning
String msg = "Count = " + count + " of " + total + ".";
into
StringBuffer sb = new StringBuffer("Count = ");
sb.append(count);
sb.append(" of ");
sb.append(total);
sb.append(".");
String msg = sb.toString();
It used to be quite common practice to use the technique in a loop, because it was measurably faster. The thing is, StringBuffer is synchronized, so there's actually extra overhead if you're only concatenating a few Strings. (Not to mention that the difference is absolutely trivial on this scale.) Two other points about this practice:
StringBuilder is unsynchronized, so should be preferred over StringBuffer in cases where your code can't be called from multiple threads.
Modern Java compilers will turn readable String concatenation into optimized bytecode for you when it's appropriate anyway.
I once saw a MSSQL database that used a 'Root' table. The Root table had four columns: GUID (uniqueidentifier), ID (int), LastModDate (datetime), and CreateDate (datetime). All tables in the database were Foreign Key'd to the Root table. Whenever a new row was created in any table in the db, you had to use a couple of stored procedures to insert an entry in the Root table before you could get to the actual table you cared about (rather than the database doing the job for you with a few simple triggers).
This created a mess of useless overhead and headaches, required anything written on top of it to use sprocs (eliminating my hopes of introducing LINQ to the company - it was possible, but just not worth the headache), and to top it off didn't even accomplish what it was supposed to do.
The developer that chose this path defended it under the assumption that this saved tons of space because we weren't using Guids on the tables themselves (but...isn't a GUID generated in the Root table for every row we make?), improved performance somehow, and made it "easy" to audit changes to the database.
Oh, and the database diagram looked like a mutant spider from hell.
How about POBI -- pessimization obviously by intent?
A colleague of mine in the 90s was tired of getting kicked in the ass by the CEO just because the CEO spent the first day of every ERP software (a custom one) release locating performance issues in the new functionality. Even if the new functionality crunched gigabytes and made the impossible possible, he always found some detail, or even a seemingly major issue, to whine about. He believed he knew a lot about programming and got his kicks by kicking programmer asses.
Due to the incompetent nature of the criticism (he was a CEO, not an IT guy), my colleague never managed to get it right. If you do not have a performance problem, you cannot eliminate it...
Until for one release, he put a lot of Delay (200) function calls (it was Delphi) into the new code.
It took just 20 minutes after go-live, and he was ordered to appear in the CEO's office to fetch his overdue insults in person.
The only unusual thing was that my colleague stayed mute when he returned, smiling, joking, going out for a Big Mac or two, while he normally would kick tables, flame about the CEO and the company, and spend the rest of the day thoroughly miserable.
Naturally, my colleague then rested for one or two days at his desk, improving his aiming skills in Quake - then on the second or third day he deleted the Delay calls, rebuilt, and released an "emergency patch", spreading the word that he had spent 2 days and 1 night fixing the performance holes.
This was the first (and only) time that evil CEO said "great job!" to him. That's all that counts, right?
This was real POBI.
But it also is a kind of social process optimization, so it's 100% ok.
I think.
"Database Independence". This meant no stored procs, triggers, etc - not even any foreign keys.
var stringBuilder = new StringBuilder();
stringBuilder.Append(myObj.a + myObj.b + myObj.c + myObj.d);
string cat = stringBuilder.ToString();
Best use of a StringBuilder I've ever seen.
Using a regex to split a string when a simple string.split suffices
Very late to this thread I know, but I saw this recently:
bool isFinished = GetIsFinished();
switch (isFinished)
{
    case true:
        DoFinish();
        break;
    case false:
        DoNextStep();
        break;
    default:
        DoNextStep();
        break;
}
Y'know, just in case a boolean had some extra values...
Worst example I can think of is an internal database at my company containing information on all employees. It gets a nightly update from HR and has an ASP.NET web service on top. Many other apps use the web service to populate things like search/dropdown fields.
The pessimism is that the developer thought that repeated calls to the web service would be too slow to make repeated SQL queries. So what did he do? The application start event reads in the entire database and converts it all to objects in memory, stored indefinitely until the app pool is recycled. This code was so slow, it would take 15 minutes to load in less than 2000 employees. If you inadvertently recycled the app pool during the day, it could take 30 minutes or more, because each web service request would start multiple concurrent reloads. For this reason, new hires wouldn't appear in the database the first day when their account was created and therefore would not be able to access most internal apps on their first couple days, twiddling their thumbs.
The second level of pessimism is that the development manager doesn't want to touch it for fear of breaking dependent applications, yet we continue to have sporadic company-wide outages of critical applications due to the poor design of such a simple component.
No one seems to have mentioned sorting, so I will.
Several different times, I've discovered that someone had hand-crafted a bubblesort because the situation "didn't require" a call to the "too fancy" quicksort algorithm that already existed. The developer was satisfied when their hand-crafted bubblesort worked well enough on the ten rows of data they were using for testing. It didn't go over quite as well after the customer had added a couple of thousand rows.
I once worked on an app that was full of code like this:
1   tuple *FindTuple( DataSet *set, int target ) {
2       tuple *found = null;
3       tuple *curr = GetFirstTupleOfSet(set);
4       while (curr) {
5           if (curr->id == target)
6               found = curr;
7           curr = GetNextTuple(curr);
8       }
9       return found;
10  }
Simply removing found, returning null at the end, and changing the sixth line to:
return curr;
doubled the app's performance.
I once had to attempt to modify code that included these gems in the Constants class
public static String COMMA_DELIMINATOR=",";
public static String COMMA_SPACE_DELIMINATOR=", ";
public static String COLIN_DELIMINATOR=":";
Each of these was used multiple times in the rest of the application for different purposes. COMMA_DELIMINATOR littered the code, with over 200 uses across 8 different packages.
The all-time number one, which I run into time and time again in in-house software:
Not using the features of the DBMS for "portability" reasons, because "we might want to switch to another vendor later".
Read my lips. For any in-house work: IT WILL NOT HAPPEN!
I had a co-worker who was trying to outwit our C compiler's optimizer and routinely rewrote code that only he could read. One of his favorite tricks was changing a readable method like (making up some code):
int some_method(int input1, int input2) {
    int x;
    if (input1 == -1) {
        return 0;
    }
    if (input1 == input2) {
        return input1;
    }
    ... a long expression here ...
    return x;
}
into this:
int some_method(int input1, int input2) {
    return (input1 == -1) ? 0 : (input1 == input2) ? input1 :
        ... a long expression ...
        ... a long expression ...
        ... a long expression ...
}
That is, the first line of a once-readable method would become "return", and all the other logic would be replaced by deeply nested ternary expressions. When you tried to argue about how this was unmaintainable, he would point to the fact that the assembly output of his method was three or four assembly instructions shorter. It wasn't necessarily any faster, but it was always a tiny bit shorter. This was an embedded system where memory usage occasionally did matter, but there were far easier optimizations available that would have left the code readable.
Then, after this, for some reason he decided that ptr->structElement was too unreadable, so he started changing all of these into (*ptr).structElement on the theory that it was more readable and faster as well.
Turning readable code into unreadable code for, at most, a 1% improvement, and sometimes actually slower code.
In one of my first jobs as a full-fledged developer, I took over a project for a program that was suffering scaling issues. It would work reasonably well on small data sets, but would completely crash when given large quantities of data.
As I dug in, I found that the original programmer had sought to speed things up by parallelizing the analysis - launching a new thread for each additional data source. However, he'd made a mistake in that all the threads required a shared resource, on which they were deadlocking. Of course, all the benefits of concurrency disappeared. Moreover, launching 100+ threads only to have all but one of them lock crashed most systems. My beefy dev machine was an exception in that it churned through a 150-source dataset in around 6 hours.
So to fix it, I removed the multi-threading components and cleaned up the I/O. With no other changes, execution time on the 150-source dataset dropped below 10 minutes on my machine, and from infinity to under half an hour on the average company machine.
I suppose I could offer this gem:
unsigned long isqrt(unsigned long value)
{
    unsigned long tmp = 1, root = 0;

#define ISQRT_INNER(shift) \
    { \
        if (value >= (tmp = ((root << 1) + (1 << (shift))) << (shift))) \
        { \
            root += 1 << shift; \
            value -= tmp; \
        } \
    }

    // Find out how many bytes our value uses
    // so we don't do any unneeded work.
    if (value & 0xffff0000)
    {
        if ((value & 0xff000000) == 0)
            tmp = 3;
        else
            tmp = 4;
    }
    else if (value & 0x0000ff00)
        tmp = 2;

    switch (tmp)
    {
    case 4:
        ISQRT_INNER(15);
        ISQRT_INNER(14);
        ISQRT_INNER(13);
        ISQRT_INNER(12);
    case 3:
        ISQRT_INNER(11);
        ISQRT_INNER(10);
        ISQRT_INNER( 9);
        ISQRT_INNER( 8);
    case 2:
        ISQRT_INNER( 7);
        ISQRT_INNER( 6);
        ISQRT_INNER( 5);
        ISQRT_INNER( 4);
    case 1:
        ISQRT_INNER( 3);
        ISQRT_INNER( 2);
        ISQRT_INNER( 1);
        ISQRT_INNER( 0);
    }
#undef ISQRT_INNER

    return root;
}
Since the square-root was calculated at a very sensitive place, I got the task of looking into a way to make it faster. This small refactoring reduced the execution time by a third (for the combination of hardware and compiler used, YMMV):
unsigned long isqrt(unsigned long value)
{
    unsigned long tmp = 1, root = 0;

#define ISQRT_INNER(shift) \
    { \
        if (value >= (tmp = ((root << 1) + (1 << (shift))) << (shift))) \
        { \
            root += 1 << shift; \
            value -= tmp; \
        } \
    }

    ISQRT_INNER(15);
    ISQRT_INNER(14);
    ISQRT_INNER(13);
    ISQRT_INNER(12);
    ISQRT_INNER(11);
    ISQRT_INNER(10);
    ISQRT_INNER( 9);
    ISQRT_INNER( 8);
    ISQRT_INNER( 7);
    ISQRT_INNER( 6);
    ISQRT_INNER( 5);
    ISQRT_INNER( 4);
    ISQRT_INNER( 3);
    ISQRT_INNER( 2);
    ISQRT_INNER( 1);
    ISQRT_INNER( 0);
#undef ISQRT_INNER

    return root;
}
Of course there are both faster AND better ways to do this, but I think it's a pretty neat example of a pessimization.
Edit: Come to think of it, the unrolled loop was actually also a neat pessimization. Digging through the version control, I can present the second stage of refactoring as well, which performed even better than the above:
unsigned long isqrt(unsigned long value)
{
    unsigned long tmp = 1 << 30, root = 0;

    while (tmp != 0)
    {
        if (value >= root + tmp) {
            value -= root + tmp;
            root += tmp << 1;
        }
        root >>= 1;
        tmp >>= 2;
    }

    return root;
}
This is exactly the same algorithm, albeit a slightly different implementation, so I suppose it qualifies.
This might be at a higher level than what you were after, but fixing it (if you're allowed to) also involves a higher level of pain:
Insisting on hand-rolling an Object Relational Mapper / Data Access Layer instead of using one of the established, tested, mature libraries out there (even after they've been pointed out to you).
All foreign-key constraints were removed from a database, because otherwise there would be so many errors.
This doesn't exactly fit the question, but I'll mention it anyway as a cautionary tale. I was working on a distributed app that was running slowly, and flew down to DC to sit in on a meeting primarily aimed at solving the problem. The project lead started to outline a re-architecture aimed at resolving the delay. I volunteered that I had taken some measurements over the weekend that isolated the bottleneck to a single method. It turned out there was a missing record in a local lookup, causing the application to go to a remote server on every transaction. By adding the record back to the local store, the delay was eliminated - problem solved. Note that the re-architecture wouldn't have fixed the problem.
Checking before EVERY javascript operation whether the object you are operating upon exists.
if (myObj) { // or its evil cousin, if (myObj != null) {
    label.text = myObj.value;
    // we know label exists because it has already been
    // checked in a big if block somewhere at the top
}
My problem with this type of code is that nobody seems to care what happens if it doesn't exist. Just do nothing? Don't give any feedback to the user?
I agree that "Object expected" errors are annoying, but this is not the best solution for that.
How about YAGNI extremism? It is a form of premature pessimization. It seems like any time you apply YAGNI, you end up needing it, resulting in 10 times the effort to add it compared to having added it in the beginning. If you create a successful program, then odds are YOU ARE GOING TO NEED IT. If you are used to creating programs whose life runs out quickly, then continue to practice YAGNI, because then, I suppose, YAGNI.
Not exactly premature optimisation - but certainly misguided - this was read on the BBC website, from an article discussing Windows 7.
Mr Curran said that the Microsoft Windows team had been poring over every aspect of the operating system to make improvements.
"We were able to shave 400 milliseconds off the shutdown time by slightly trimming the WAV file shutdown music.
Now, I haven't tried Windows 7 yet, so I might be wrong, but I'm willing to bet that there are other issues in there that are more important than how long it takes to shut down. After all, once I see the 'Shutting down Windows' message, the monitor is turned off and I'm walking away - how do those 400 milliseconds benefit me?
Someone in my department once wrote a string class. An interface like CString, but without the Windows dependence.
One "optimization" they did was to not allocate any more memory than necessary. Apparently not realizing that the reason classes like std::string do allocate excess memory is so that a sequence of += operations can run in O(n) time.
Instead, every single += call forced a reallocation, which turned repeated appends into an O(n²) Schlemiel the Painter's algorithm.
An ex-coworker of mine (a s.o.a.b., actually) was assigned to build a new module for our Java ERP that was supposed to collect and analyze customers' data (retail industry). He decided to split EVERY Calendar/Datetime field into its components (seconds, minutes, hours, day, month, year, day of week, bimester, trimester (!)) because "how else would I query for 'every Monday'?"
No offense to anyone, but I just graded an assignment (Java) that had this:
import java.lang.*;