Run time vs design time

I read somewhere that somebody could access a config value during run time but not during design time. What's the difference between run time and design time in this context?

Design time is when somebody signs off on our Word documents and our UML diagrams with a cheery "That looks fine!" Run time is when we execute our code and it fails with a horrible crash and burn.
The advantage of a technique like TDD is that it compresses the gap between design time and run time to the point where they are the same thing. This means we get instant feedback on how our design actually works when translated into code, which should result in a better design and fewer embarrassments when our code goes live. YMMV.

Design time is when you are creating a design based on the requirements, or creating some UML diagrams.
Run time is when you are implementing your design and running the code.

Are you talking about .NET applications? In that case design time probably means something more specific: when your GUI is presented within the Visual Studio designer. This gives you a working view of your application, but it is running in a design-time environment. Many .NET controls have a DesignMode property that allows you to tell whether the control is running in design-time view or not.
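For example, a control can skip work that only makes sense at run time when it detects that it is being hosted by the designer. A minimal sketch (the control and method names here are purely illustrative):

using System;
using System.Windows.Forms;

// A control that skips runtime-only work while it is being rendered
// by the Visual Studio designer.
public class CustomerList : UserControl
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        // DesignMode is true when the control is hosted by the designer,
        // false when the compiled application is actually running.
        if (DesignMode)
            return;

        LoadCustomers(); // hypothetical runtime-only initialization
    }

    private void LoadCustomers()
    {
        // e.g. read connection strings from config and query the database
    }
}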

Design time is when you design some code.
Run time is when you execute the code you designed.

Run time is when your program runs. Design time is when your program is designed.

Design time refers to processes that take place during development; runtime refers to processes that take place while the application is running.
For instance, constants that are hardcoded in your application are set at design time, such as...
// you need to recompile your solution to change this,
// hence it is said that its value is set at design time.
const string value = "this is set at design time";
Configuration values that are pulled from a config file, on the other hand, are said to be set at runtime, such as...
// You do not need to recompile your solution to change this,
// hence the value is said to be set at runtime.
string value = ConfigurationManager.AppSettings["key"];

As a developer, you must aim for the ideal equilibrium between design time (let's take it to mean 'the time you spend designing and developing the app', though it's a bit incorrect) and run time, which I take to mean 'the time the user stands looking at the hourglass waiting for his important report to be rendered'.
Too much focus on 'design time' and you might run out of the scheduled programming time, and your client will pull out of the contract, he'll badmouth you, and kittens will die. Too little, and your program will, as they say, suck. Remember that 'shipping is a feature, one your program should have'.
Unless what they meant by "run time" is "runtime" and that means something else entirely.

Related

Pascal running multiple procedures at the same time

Can Pascal run multiple procedures at the same time?
If yes, can anyone provide the code?
I would like to display a clock on screen (in the command prompt), but at the same time I want the program to also accept input.
I use
write(DateTimeToStr(now))
to display the current time and use a repeat loop to keep refreshing it, but the repeat loop makes it impossible to accept input at the same time, as the cursor keeps flashing.
Pascal, as a language, has no multiprocessing/multithreading capabilities. So, no.
Now, I guess you're using that antique language for a reason, probably in a more recent implementation like FreePascal, which, for example, has a threading implementation. Giving you a full tour of multithreading in general and in FreePascal in particular would definitely be too much for a single answer, so go and search Google for "freepascal multithreading".
Start the Free Pascal textmode IDE and you'll see that the timer runs without actually using threading.
Event-driven principles and updating the clock when idle go a long way...
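To illustrate the idea, here is a rough sketch (assuming Free Pascal with the SysUtils and Crt units; the layout and key handling are just illustrative): the clock is redrawn while the keyboard is idle and keystrokes are consumed whenever they arrive, with no threads at all.

program ClockDemo;
{ Rough sketch: keep a clock ticking while still accepting keystrokes. }
uses
  SysUtils, Crt;
var
  Line: string;
  Ch: Char;
begin
  Line := '';
  Ch := #0;
  ClrScr;
  repeat
    { Redraw the clock in the top-left corner while the keyboard is idle. }
    GotoXY(1, 1);
    Write(DateTimeToStr(Now));
    { Consume any pending keystrokes without blocking. }
    while KeyPressed do
    begin
      Ch := ReadKey;
      if Ch <> #27 then
        Line := Line + Ch;
      GotoXY(1, 3);
      Write('Input so far: ', Line);
    end;
    Delay(200); { sleep briefly so the loop does not burn CPU }
  until Ch = #27; { Esc quits }
end.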

Test Driven Development initial implementation

A common practice of TDD is that you make tiny steps. But one thing which is bugging me is something I've seen a few people do, whereby they just hardcode values/options, and then refactor later to make it work properly. For example…
describe Calculator
  it should multiply
    assert Calculator.multiply(4, 2) == 8
Then you do the least possible to make it pass:
class Calculator
  def self.multiply(a, b)
    return 8
  end
end
And it does!
Why do people do this? Is it to ensure they're actually implementing the method in the right class or something? Because it just seems like a sure-fire way to introduce bugs and give false confidence if you forget something. Is it a good practice?
This practice is known as "Fake it 'til you make it." In other words, put fake implementations in until such time as it becomes simpler to put in a real implementation. You ask why we do this.
I do this for a number of reasons. One is simply to ensure that my test is being run. It's possible to be configured wrong so that when I hit my magic "run tests" key I'm actually not running the tests I think I'm running. If I press the button and it's red, then put in the fake implementation and it's green, I know I'm really running my tests.
Another reason for this practice is to keep a quick red/green/refactor rhythm going. That is the heartbeat that drives TDD, and it's important that it have a quick cycle. Important so you feel the progress, important so you know where you're at. Some problems (not this one, obviously) can't be solved in a quick heartbeat, but we must advance on them in a heartbeat. Fake it 'til you make it is a way to ensure that timely progress. See also flow.
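Concretely, the fake only survives until the next example forces a real implementation. Sticking with the question's pseudocode, adding a second assertion makes the hardcoded "return 8" fail, and the obvious general implementation becomes the simplest thing that passes:

describe Calculator
  it should multiply
    assert Calculator.multiply(4, 2) == 8
    assert Calculator.multiply(3, 5) == 15

class Calculator
  def self.multiply(a, b)
    return a * b
  end
end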
There is a school of thought, which can be useful in training programmers to use TDD, that says you should not have any lines of source code that were not originally part of a unit test. By first coding the algorithm that passes the test into the test, you verify that your core logic works. Then, you refactor it out into something your production code can use, and write integration tests to define the interaction and thus the object structure containing this logic.
Also, religious TDD adherence would tell you that there should be no logic coded that a requirement, verified by an assertion in a unit test, does not specifically state. Case in point; at this time, the only test for multiplication in the system is asserting that the answer must be 8. So, at this time, the answer is ALWAYS 8, because the requirements tell you nothing different.
This seems very strict, and in the context of a simple case like this, nonsensical; to verify correct functionality in the general case, you would need an infinite number of unit tests, when you as an intelligent human being "know" how multiplication is supposed to work and could easily set up a test that generated and tested a multiplication table up to some limit that would make you confident it would work in all necessary cases. However, in more complex scenarios with more involved algorithms, this becomes a useful study in the benefits of YAGNI. If the requirement states that you need to be able to save record A to the DB, and the ability to save record B is omitted, then you must conclude "you ain't gonna need" the ability to save record B, until a requirement comes in that states this. If you implement the ability to save record B before you know you need to, then if it turns out you never need to then you have wasted time and effort building that into the system; you have code with no business purpose, that regardless can still "break" your system and thus requires maintenance.
Even in the simpler cases, you may end up coding more than you need if you code beyond requirements that you "know" are too light or specific. Let's say you were implementing some sort of parser for string codes. The requirements state that the string code "AA" = 1, and "AB" = 2, and that's the limit of the requirements. But, you know the full library of codes in this system includes 20 others, so you include logic and tests that parse the full library. You go back to the client, expecting your payment for time and materials, and the client says "we didn't ask for that; we only ever use the two codes we specified in the tests, so we're not paying you for the extra work". And they would be exactly right; you've technically tried to bilk them by charging for code they didn't ask for and don't need.

Delphi project with many units takes a long time to start

I have a dpr with 290+ units.
The compiled exe is 50MB.
The dpr code is now like this:
begin
ShowMessage('Before Initialize');
Application.Initialize;
When I double-click the built exe, I notice that 8 seconds pass before I see "Before Initialize". Is this because of the big exe size? Or is there a way to minimize this time?
Before Application.Initialize every initialization section of every unit is executed. You might have some code in there that takes time.
The number of units is not the problem. I have a project with 1100+ units, exe is 35 MB and it starts instantaneously.
If you are starting from network drive or really slow disk you might experience a slowdown.
Based on your question, it can be anything.
The only advice I can give you is to measure:
Log the timestamps of every entry/exit in all your unit initialization sections.
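A rough sketch of what that logging can look like, per unit (OutputDebugString and the unit name here are just one option; any log target will do):

unit SomeSlowUnit;

interface

implementation

uses
  Windows, SysUtils;

initialization
  OutputDebugString(PChar('SomeSlowUnit initialization start ' +
    FormatDateTime('hh:nn:ss.zzz', Now)));

  // ... the unit's existing initialization code ...

  OutputDebugString(PChar('SomeSlowUnit initialization end ' +
    FormatDateTime('hh:nn:ss.zzz', Now)));

end.

Run the exe under the debugger (or a tool like DebugView) and the timestamps will show which units are eating the 8 seconds.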
Based on one of your comments (which you should add to your question as it describes more detail):
WindowsCodecs.dll is initialized by one of your units, probably converting one or more images from one format into another.
You should delay the conversion until the result of that conversion is needed.
--jeroen
The initialization sections of the units are normally not a speed problem (unless you have some database-related stuff in there).
What could be slow is the TForm loading from resources.
It's always better to have the TForm created on the fly, only when it's necessary:
- Go to "Project" menu, then select "Options", then the "Forms" tab.
- Move all non-mandatory forms from the left list to the right "available" list.
- Create the forms on request, by some code.
The unit remains the same:
type
  TOneForm = class(TForm)
    ....
  end;

var
  OneForm: TOneForm;
But you can use the following code to create the form on request:
Instead of your former
OneForm.ShowModal;
use this kind of code:
if OneForm = nil then
  OneForm := TOneForm.Create(Application);
OneForm.ShowModal;
You'll find the loading of the application much faster.
Note:
I just realized that the problem occurs before form loading.
So the above trick won't work for this particular problem.
I keep the answer because it could be useful to others.
I'll read better next time. :(
In all cases, having a lot of code run from the initialization is not a good design.
It sounds like a lot of global objects or variables... refactoring could make sense here... :)
You already know that if you have a lot of forms, you should try moving them out of the "auto create" list and then add code to create the forms when they are needed. But you are seeing this problem before you could even create a form. So, as others have said, initialization sections are the problem.
Jeroen's blog pointed me at a great resource for debugging this:
http://wiert.wordpress.com/2010/07/21/delphi-great-post-by-malcolm-groves-about-debugging-initialization-and-finalization-sections/
He pointed me to Malcolm Groves:
http://www.malcolmgroves.com/blog/?p=649
There are a lot of good suggestions in this question.
You should absolutely make sure you aren't creating things on startup that you don't need right away. This is usually the biggest launch delay on projects with a lot of forms.
In your case, it sounds like a lot of initialization code is being executed.

How to break someone into testing?

OK. Our product works. Beta testers are actually getting their stuff done. Time for the next iteration. But how to ensure quality? We need a tester!
How do I get someone fresh off the street started in testing? I have no clue on how to do it myself (I'm a developer, not a tester)!
We are a tiny team:
2 architects (as in buildings, not software, they are the domain experts here) figuring out what to build
me building it
and a new guy to do some testing before we push releases out
None of us has a clue on how to do this professionally. So far we have:
a bunch of virtual machines spanning the configurations we would like to test
various versions of windows
german and english, the two languages likely to be in use by our customers
the host software we are writing for (Autodesk Revit Architecture 2010, we are building a plugin for energy calculations)
a text document describing some tests I did (installed release xyz, did this, did that, etc.)
a bug tracking system where the tester can add all the bugs he finds
I expect we will need a test script. But how? Who? What? When?
Why are you looking for "someone off the street"? To me, it sounds kind of like asking "I want to hire a new programmer, how do I get someone off the street and get him up to speed programming my software?". Why would you want to do that, over hiring someone who is a programmer already?
In your situation, which is that you don't know much about testing, I'd definitely think about hiring someone with experience in the field.
Specifically, I'd probably look for:
Someone with some experience performing tests under his belt (since you're going to want him actually doing tests).
Someone with some experience writing test plans/etc.
Someone with some experience running a QA team.
The last point is optional, but hopefully your team will be growing as your software grows, so it might make sense to get someone who can grow in the role as well (not to mention having the experience to help you decide when and how to grow the QA team).
Well, are you looking to expand your team with a tester? Have you considered just hiring a test specialist from a consultancy firm?
Before you get somebody to test, make sure you meet the requirements for testing. At a minimum you need:
A specification: Some authoritative source on what the application is supposed to do. This could be an expert that can answer any and all questions on exactly what the app is supposed to do, but the more that is written down and the more formally defined it is the better.
Time: Testing takes time. You can't hand off an application to the tester 30 minutes before it's supposed to go live and expect any worthwhile results. If you're doing waterfall development, testing will require a lot of time at the end. Lots of other development models let testing run in parallel with development, which saves a lot of time, but regardless of the model you use, testing will require more time than not testing.
If you don't have these two things, quality assurance is just a pipe dream.
Now if you do have those met, and you're trying to train somebody to test, here's my crash course on testing.
Fundamentally, testing an application means that you are attempting to ensure two things:
The program does what it is supposed to do.
The program does not do what it is not supposed to do.
That's the core mindset that I use. Building from that I approach things in terms of actions and attempt to verify:
An expected action with expected preconditions produces an expected effect.
An expected action with unexpected preconditions produces no effect or is handled appropriately.
An unexpected action produces no effect or is handled appropriately.
No unexpected effects occur.
Item 1 comes directly from the spec: You make sure that the program does what it is supposed to do.
Items 2 and 3 are where the art of testing comes in. What unexpected actions and preconditions can I perform? I could try to enter the wrong password. I could try to directly type in the URL of a supposedly secured page. I could try to paste odd Unicode characters into a text field. I could try to put SQL or JavaScript code into a text field.
Item 4 is the infinite no-man's land of testing, the part that makes complete testing impossible. (2 and 3 are also infinite, but not as depressing to think about.) That doesn't mean you ignore it. You always keep an eye out for anything unusual. Also, sometimes inspiration strikes and you think of a possible way to cause an unexpected effect: "What happens if I log in between 11:59:59 PM and 12:00:00 AM on the third Tuesday of the month? Oh look, it made me an administrator." Technical knowledge and a peek inside the black box help with coming up with scenarios like that.
There is a whole lot more to say about testing, but that's the bare minimum I can think of: The technical requirements and the approach to the problem.
Ideally, you'll need to give the tester:
training to make sure he knows the product to be tested.
documentation on what the expected results are.
test plans - what needs to be tested and how
a test tracking system to track what is being tested, what passed the tests, what needs to be fixed, etc. That system does not have to be too sophisticated, depending on the size of the project, an Excel spreadsheet may suffice.
In their podcast #64, Jeff and Joel discuss (among other things) what skills a good tester should possess. Transcript also available (about halfway down the page)

What do you do with atrocious code?

What do you do when you're assigned to work on code that's
atrocious and antiquated to the point where it's almost incomprehensible?
For example: hardware interface code, mixed with logic, AND user interface code, ALL in the same functions?
We see bad code all the time, but what do you actually do about it?
Do you try to refactor it?
Try to make it OO if it's not?
Or do you try to make some sense of it, make the necessary changes and move on?
Depends on a few factors for me:
Will I be maintaining this code in the future, or is it a one-off fix?
How long until this system is replaced entirely?
How busy am I at the moment?
Ideally, I'd refactor all bad code I had to maintain, but the reality is there are only so many hours in the day.
As is frequently the case, "It Depends".
I tend to ask myself some of the following questions:
Are there unit tests for the existing code?
Is refactoring the code an acceptable risk for my project?
Is the author still available to clarify any questions I might have about the code?
Will my employer consider the time spent on changing existing, functioning code to be an acceptable use of my time?
And so on...
But assuming that I have the capacity to do so, refactoring is preferable, as the up-front cost of fixing the code now will likely save me a lot of time and effort later in maintenance and development.
There are other benefits as well, including the fact that the more clean and well maintained you keep your code base, the more likely other developers are to keep it that way. The Pragmatic Programmer calls this the Broken Window Theory.
Developers have an instinct to assume that code is always ugly because of other, inferior developers. Sometimes, code is ugly because the problem space is ugly. All that ugliness isn't just ugliness - it is sometimes institutional memory. Each line of ugly in your code probably represents a bug fix. So think very carefully before you rip it all out.
Basically, I would say that you shouldn't touch code like this unless you actually have to. If there's a real bug that you can solve, refactoring is reasonable, if you can be sure you're maintaining the same amount of functionality. But refactoring for the sake of refactoring (e.g., "make the code OO") is what I would generally classify as a classic newbie mistake.
The book Working Effectively with Legacy Code discusses the options you can do. In general the rule is not to change code until you have to (to fix a bug or add a feature). The book describes how to make changes when you can't add testing and how to add testing to complex code (which allows more substantial changes).
You try to refactor it, in the strict sense of the word, where you're not changing the behaviour.
The first target is usually to break up giant methods.
Given the strength of some of the adjectives you use, i.e. atrocious, antiquated and incomprehensible, I'd bin it!
If it is in such a state, like the example you give, it probably hasn't got any test code either. Refactoring is mentioned in many of the other answers but, sometimes, it is not appropriate. I always find that, when refactoring, you generally need a clear path through which the old code can be gradually morphed into the new in a number of well-defined steps.
When the old code is so far removed from how you want it to look, such as the extreme cases you seem to be suggesting, you could probably redesign, rewrite and test the new code in a shorter time than it would to take to refactor it.
Scrap it and start over, using the compiled legacy application as a business requirements document.
And spending time in analysis with the users to see what they want changed.
Post it to www.worsethanfailure.com!!!
If no modifications are needed, I don't touch it.
If at all possible, I write automated unit tests first, especially focused on the areas that need modification.
If automated unit tests are not possible, I do what I can to document manual unit tests.
I am just using the tests to document "current" behavior at this point.
If possible, I always keep a version of the code and executable environment that runs things the "original" way (before I touched it) so I can always add new "behavior documentation" tests and better detect regressions I may have caused later.
Once I start changing things, I want to be very careful not to introduce regressions. I do this by continually rerunning (and or adding new tests) to the tests I wrote before I started writing code.
When possible, I leave bugs as-is if there is no business need for them to be fixed. Those bugs may be "features" to some users and may have unclear side effects that wouldn't be clear until the code was redeployed to production.
As far as refactoring, I do that as aggressively as possible, but only in the code that I need to change otherwise anyway. I may refactor more aggressively in my own personal copy of the code that will never be checked in, just to improve the readability of the code for me personally. It's often times difficult to properly test changes that are only made for readability reasons, so for safety reasons, I generally don't check those changes in / deploy them unless I can confidently test that the code changes are completely safe (it's really bad to introduce bugs when you are making changes that are unnecessary for anything but readability).
Really, it's a risk management problem. Proceed with caution. The users do not care if the code is atrocious, they just care that it gets better without getting worse. Your need for beautiful code is not important in this scenario, get past it.
Just like any other code, you leave it slightly better when you leave than it was when you arrived. You do not ever, ever rewrite the whole code base. If that is the work it takes for some reason, then you start a project (small or large) for it.
I am assuming we are talking about a substantial amount of code here.
Not every day is a great day at work you know :)
The first question to ask is: does it work?
If the answer is yes, that would be a huge disincentive to simply ditch it and start over. There may be thousands of man-hours in that code which address edge cases and nasty bugs. Worse yet, there may be other modules in the system that depend on the current incorrect (but known and possibly documented) behavior. Don't mess with it if it isn't broken.
If you are keen on cleaning it up, start by writing test cases for the current behavior. When you run across an instance where the behavior differs from the specification, you must decide whether to accept the behavior as "correct" or go with what the spec says it ought to do.
Only once you have written test cases that all pass should you begin to refactor. The tests will tell you whether your efforts are breaking anything.
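Such a test can be as simple as pinning the current output, whatever it is. A small sketch (the class, values, and NUnit usage here are purely illustrative):

using NUnit.Framework;

[TestFixture]
public class LegacyPriceCalculatorCharacterization
{
    [Test]
    public void DiscountedPrice_matches_current_shipped_behavior()
    {
        var calculator = new LegacyPriceCalculator(); // hypothetical legacy class

        // 7.77m is simply what the code produces today for this input.
        // If the spec says otherwise, decide deliberately which one wins
        // before refactoring; until then, this test guards the status quo.
        Assert.AreEqual(7.77m, calculator.DiscountedPrice(8.63m, 0.10m));
    }
}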
I'd talk to my manager and describe the code. Most managers would not want a program held together by baling wire and duct tape. If the code is really that bad, there are sure to be some business logic errors, hardcoding, etc. stuffed in there that will eventually just destroy productivity.
I've come across some pretty bad code before (single letter variable names, no comments, everything crammed onto one line, etc.) and once I mentioned/showed it to my manager they almost always said "go ahead and re-write it", because not only are you taking the hit for reading and changing the code but future co-workers will have to go through the same pain. Better that you take a longer period of time just once to rewrite it rather than having each person who touches the code in the future have to go through and comprehend and decipher it first.
There is an old saying: if it isn't broke, don't fix it. If you have to maintain it, then reverse engineer it and document it so the next time you come across it you will know what it does.
You do not know the situation the developer was in when he or she wrote the code. He or she may have been under a time crunch when it was written (management was all over the developer, etc.).
There are also situations where he or she wrote the code per the spec, the spec then changed several times, and the developer had to patch the code, as a rewrite was out of the question due to time constraints. This happens all of the time.
If the code impacts the performance or robustness of the application and is modular, then you can refactor or rewrite. Document the situation to assist future programmers in understanding it.
Also, many programmers consider reverse engineering other developers' code to be beneath them; they would rather rewrite without considering the ramifications of doing so.
If you have never done so, try it sometime; it will make you a better developer.
Thanks
Joe
Kill it with fire.
Depends on your time frame and how important that code is to you. If you have to "just make it work" then do that and rewrite the module when time allows.
If it's an important or integral part of what you do, then refactor, refactor, refactor.
Then find the guy/girl who wrote it and send them a rude postcard!
The worst cause (in my experience) of really AWFUL code is the ease with which people can cut & paste these days. Cut & paste should be used rarely. If you think it's the right solution, it's generally better to step back and generalize the problem a little.
Anytime you see code that is "nearly incomprehensible", PROCEED WITH CAUTION. You need to assume that any major re-factoring will result in new bugs being introduced that you'll need to find and correct.
Additionally, I've seen this scenario many times (even fell victim to it myself once or twice): Programmer inherits legacy code, decides code is ancient & unmaintainable and decides to refactor it, ends up deleting key "fixes" or "business rules" subtly patched in over the years, ends up spending a lot of time tracking down and re-introducing similar code when users complain about "a problem fixed years ago is happening again".
Re-factoring (and debugging) almost always takes longer than expected and should never be considered as a "freebie" that comes along with whatever task you're supposed to be doing.
"If it ain't broke, don't 'fix' it" still has a lot of truth.
In my company we always Refactor Mercilessly, so we still come across atrocious code, but LESS and Less and less...
We write a lot of in-house code, and the company has been run for about 100 years by the same family. Management usually tells us we have to maintain (and evolve) the code base for another 50 years or so. In this setting, having code you don't dare to touch is considered a bigger risk to the long-term survival of the company than the prospect of downtime because some under-tested code broke during refactoring.
I run a copy-paste detector and FindBugs on all legacy code that comes my way.
I then plan my initial refactoring:
remove unused code, unused variables and unused methods
refactor duplicated code
set up a single step build
build a basic functional test
By that point the code meets the basic minimum for maintainability. It can be easily built and basic errors can be found via an automated test.
I often add code like this:
log.debug("is foo null? " + (foo == null));
log.debug("is discount < raw price ? " + (foo.getDiscount() < foo.getRawPrice()));
Some of that code will be recovered for unit tests once I am able to refactor.
I've worked places where we ship that kind of code.
I try to make sense of it, make the necessary changes, and move on.
Of course, making sense of it usually involves some changes; at the very least, I move around the whitespace and line up corresponding braces in the same column like so:
if(condition){
  doSomething(); }
// becomes...
if(condition)
{
  doSomething();
}
I'll also often change variable names.
And very often, "the necessary changes" require refactoring. :)
Get an idea of what the code is doing and of the deadline to finish. Given a larger deadline, I typically rebuild much of the code from the ground up. I find it a very worthwhile experience: not only do you decipher terrible code, make it legible and document it, but somewhere in your brain those neurons are pressed to avoid similar mistakes in the future.
