I have recently started working with PDDL, specifically using Fast Downward to solve problems.
However, there is a problem I encountered while working with PDDL: defining a domain and a problem that is solvable is not challenging; the big issue is the running time of the solver.
A problem that can be solved within seconds with a well-defined domain is solved within minutes with a poorly defined domain.
I want to ask which criteria I should look into when defining a PDDL domain (e.g., fewer predicates, minimal use of "forall" or "exists", etc.).
I have been looking around on the internet for advice on optimizing the domain definition for PDDL, but I have found no answers.
It's kind of an impossible question to answer crisply. General rules of thumb:
use the simplest formalism you can get away with (STRIPS being the simplest)
avoid actions with many parameters
avoid having actions with huge conditions or effects
avoid predicates with too many parameters
Beyond that, diagnosing what you could change really depends on what's happening during the solving process.
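For intuition on why parameter counts matter so much: planners such as Fast Downward typically ground every action schema, and the number of ground instances grows exponentially with the number of parameters. A back-of-the-envelope sketch in Python (the object counts are invented for illustration):

    # Why extra action parameters hurt: grounding multiplies every
    # parameter out over every object of a suitable type, so one schema
    # can explode into a huge number of ground actions.

    def ground_instances(num_objects: int, num_parameters: int) -> int:
        """Upper bound on ground actions for one schema (before pruning)."""
        return num_objects ** num_parameters

    for params in (2, 3, 4, 5):
        print(params, "parameters ->", ground_instances(20, params), "ground actions")
    # with 20 objects: 400, 8000, 160000, 3200000 ground actions

The same reasoning applies to predicates with many parameters, which blow up the number of ground facts the planner must track.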
At work, we use FSMs. Recently, I had to design an FSM for a problem that I deem "a little too complex for a simple FSM". Why? Because the problem has about 6 different data dimensions, and many combinations of this data impact the behaviour of the solution significantly. My brain thinks "6 data attributes means 2^6 = 64 combinations of this data" if it were all boolean data. Furthermore, there are about 8 inputs that can happen at any given time.
This problem made me aware that my FSM-design skills stop at the simple problems in my hobby projects. At work, we are constrained to use FSMs. That means I cannot just say "this problem is outside of the scope of FSMs; I'll use something else." Indeed, the FSM platform we have in place does provide a lot of power for our solutions.
Question: What is an approach for designing an FSM when the problem is sufficiently complex? I've researched a bit on this and found a few papers which, honestly, didn't help me much. I hope there are some best practices for this, and all I'm asking for is one. Please and thanks.
I suppose that you might be experiencing the usual "state-transition explosion", which is the known problem of traditional "flat" FSMs. Traditional FSMs "explode" because they force repetitions of the same reactions in many states; FSMs lack any mechanism to capture commonalities of behavior among states. The long-known solution is to use Hierarchical State Machines (a.k.a. Harel statecharts or UML state machines). HSMs support the concept of state nesting, in which substates inherit behavior from the surrounding superstate(s). When used correctly, state nesting eliminates the repetitions and counteracts the "explosion" problem. Most non-trivial problems are not really tractable with FSMs, but are quite manageable with HSMs.
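To make the state-nesting idea concrete, here is a minimal Python sketch (my own illustration, not any particular statechart framework): a substate handles the events it cares about and delegates everything else up to its superstate, so a common reaction is written exactly once.

    class State:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent
            self.handlers = {}          # event name -> handler function

        def on(self, event, handler):
            self.handlers[event] = handler

        def dispatch(self, event):
            state = self
            while state is not None:    # walk up the state hierarchy
                if event in state.handlers:
                    return state.handlers[event]()
                state = state.parent
            print("event", event, "ignored")

    # "operational" is the superstate; both substates inherit its FAULT reaction.
    operational = State("operational")
    operational.on("FAULT", lambda: print("common fault handling, written once"))

    idle = State("idle", parent=operational)
    running = State("running", parent=operational)
    running.on("STOP", lambda: print("stopping"))

    running.dispatch("STOP")     # handled by the substate itself
    running.dispatch("FAULT")    # inherited from the superstate
    idle.dispatch("FAULT")       # same shared reaction, no repetition

In a flat FSM, the FAULT reaction would have to be duplicated in every state; here it lives in one place.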
I am looking to implement an automated way of allocating processes to a variety of available servers. There are many types of servers (characterized by things like location, CPU, network card, etc.) and various types of processes (more than there are servers) with different priorities and location/hardware requirements. I can think of simplistic greedy algorithms, but I was wondering what other references and approaches exist for this type of problem (which I feel is pretty standard).

I am also interested in solving a related problem, in which, say, we remove one of the servers after things have been allocated and we need to reshuffle with minimal interference. This latter one I also feel is standard, but I'm not sure what some good references to look at are. Any suggestions on where to start?
Your question is pretty vague. Normally problems like this are handled either by modeling them as a set of linear constraints and optimizing an objective function (linear programming), or by modeling them as a knapsack problem.
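For concreteness, here is a rough sketch of the greedy baseline the asker already has in mind, in Python; every field and number below is invented, and a real solution would more likely encode the same constraints for an LP/ILP solver.

    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        location: str
        cpu_free: int

    @dataclass
    class Process:
        name: str
        location: str       # required location
        cpu: int
        priority: int       # higher means place first

    def allocate(processes, servers):
        """Place highest-priority processes first, on any server that fits."""
        placement = {}
        for p in sorted(processes, key=lambda p: -p.priority):
            for s in servers:
                if s.location == p.location and s.cpu_free >= p.cpu:
                    placement[p.name] = s.name
                    s.cpu_free -= p.cpu
                    break
            else:
                placement[p.name] = None    # could not be placed
        return placement

    servers = [Server("s1", "us", 8), Server("s2", "eu", 4)]
    procs = [Process("db", "us", 6, 10), Process("web", "us", 4, 5),
             Process("cache", "eu", 2, 7)]
    print(allocate(procs, servers))   # {'db': 's1', 'cache': 's2', 'web': None}

The unplaced "web" process is exactly the kind of outcome an optimizing formulation (or a smarter heuristic) would trade off against priorities, and the reshuffle-after-removal variant can be expressed in the same model with a penalty term for moving already-placed processes.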
Good programmers who write programs of moderate to higher difficulty in competitions like TopCoder or ACM ICPC have to ensure the correctness of their algorithm before submission.
Although they are provided with some sample test cases to check the output, how does that guarantee that the program will behave correctly? They can write some test cases of their own, but it won't always be possible to know the correct answer through manual calculation. How do they do it?
Update: It seems it is not quite possible to analyze and guarantee the outcome of an algorithm given the tight constraints of a competitive environment. However, if there are any common manual practices adopted while solving such problems, those should be enough to answer the question. Something like best practices.
In competitions, the top programmers have enough experience to read the question, and think of some test cases that should catch most of the possibilities for input.
This usually catches most of the bugs, but it is NOT 100% safe.
However, in real-life critical applications (critical systems on airplanes or in nuclear reactors, for example) there are methods to PROVE that a piece of code does what it is supposed to do.
This is the field of formal verification - which is way too complex and time consuming to be done during a contest, but for some systems it is used because mistakes could not be tolerated.
Some additional information:
Formal verification basically consists of 2 parts:
Manual verification - here we use proof systems such as Hoare logic and manually prove the program does what we want it to do.
Automatic model checking - modeling the problem as a state machine, and using model-checking tools to verify that the module does what it is supposed to do (or does not do something "bad").
Specifying "what it should do" is usually done with temporal logic.
This is often used to verify the correctness of hardware models as well. For example, Intel uses it to ensure they won't get another floating-point bug like the Pentium FDIV bug.
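To give a flavor of what such a temporal-logic specification looks like (my own example, not tied to any particular tool): the LTL formula G(request -> F grant) reads "globally, whenever a request occurs, a grant eventually follows". A model checker then verifies that this holds over every reachable state of the state-machine model, which is what makes the guarantee so much stronger than testing.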
Picture this: imagine you are a top programmer. That means you know a bunch of algorithms and wouldn't think twice while implementing them. You know how to modify an already known algorithm to suit the problem's needs. You are strong at estimating time and space complexity, and you expect that in the worst case your tailored algorithm will run within the time and memory constraints.
At this level, you simply think and use a scratchpad for about five to ten minutes and have a super clear algorithm before you start to code. Once you finish coding, you hit compile and there is usually no compilation error, because the code is so intuitive to you.
Then, based on the algorithm and data structures used, you expect that there might be one of the following issues:
a corner case
an overflow problem
A corner case is basically when you have coded for the general case, but when, say, N=1, the answer is different from the others, so you generally write it as a special case.
An overflow is when intermediate values or results overflow a data type's limits.
You make note of any problems which arise at this point, and use this data during the Challenge phase (as in TopCoder).
Once you have checked against these two, you hit Submit.
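One common practice worth naming explicitly is stress testing, which directly addresses the "can't compute the answer by hand" problem from the question: write a slow brute force that is obviously correct and compare it with your fast solution on thousands of small random inputs. A sketch in Python (both solvers here are placeholders for whatever the real problem asks):

    import random

    def brute_force(xs):
        """Obviously correct O(n^2) answer: maximum pairwise sum."""
        return max(xs[i] + xs[j]
                   for i in range(len(xs))
                   for j in range(i + 1, len(xs)))

    def fast(xs):
        """The optimized solution being validated."""
        a, b = sorted(xs)[-2:]
        return a + b

    for trial in range(10000):
        xs = [random.randint(-100, 100) for _ in range(random.randint(2, 8))]
        if brute_force(xs) != fast(xs):
            print("counterexample:", xs)    # shrink it by hand, fix, rerun
            break

Small inputs are deliberate: any disagreement yields a counterexample tiny enough to trace through by hand.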
There's a time element to TopCoder, so it's not possible to test every combination within that constraint. They probably do the best they can and rely on experience for the rest, just as one does in real life. I don't know that it's ever possible to guarantee that a significant piece of code is error-free forever.
... and how to prove to management that use-cases can be informal and still useful?
Hi folks,
I came into the middle of a project and found out that there are no use-cases, user stories, requirements, nor anything resembling a specification. Since the deadlines are short, the current dev team doesn't want to spend time on such things. I wanted to join that project, but by digging deeper I found out that the current development adds features just by considering their "wow effect" and chooses what to add based on the easiness that the underlying technology provides. I was surprised they have managed to go so far (more than 4 months) without requirements, but this is where we are now. I believe that the way they have chosen is the surest one to kill a product which has good marketing value.
Am I right, and what would you do in similar circumstances to convince the dev team/management to write use-cases/requirements before moving forward? Thanks in advance, kh.
P.S. Two copies of Cockburn's book are on the bookshelf...
You should give your colleagues the use-case spiel :D Tell them that use-cases are useful as they're:
A way of capturing business processes in a manner which is reasonably comprehensible by all stakeholders. This helps to bridge the gap between programmers, clients and users.
Traceable units of functionality. Use-cases are formed (ideally) in the analysis phase, referenced in the design phase, and can be used as sources for test cases later on.
Quick and easy to write up and useful, even if informal.
If you need more ammunition, you might want to read Use cases - Yesterday, today and tomorrow by none other than Ivar Jacobson.
If your colleagues still can't see the potential usefulness of use cases as a business analysis tool, then they're probably beyond help :P You should remind them that they're developing software to meet other people's needs and solve their problems in the long term, not to ostentatiously impress them in the short term with petty gimmicks. And so a little bit of direction and specification helps. Even if the use-cases themselves don't prove to be that useful, the simple act of coming up with them will force your colleagues to consider the actual underlying purpose of the software.
Ask questions, of both sides. Of development, ask them if they are certain that all of the ways in which they have considered using the application are all of the ways in which the end-users will want to use it; if they say they have, ask for proof. Of management, ask if they've ever used software that does everything they want, but still ends up being hard to use (they will have). These questions will seed the concept that what will be delivered might not be what is desired, on both sides; use that seed of an idea, then, to open up discussions (not documents, not at the start) on how the software will be used, and in what way any differences can be resolved. They'll get around to use-case documents eventually.
I am a product manager by profession, and my first reaction to your post is that ideas can come from anywhere, and if the dev team has decent ideas they should be incorporated into the product.
Having said that, a product cannot develop a soul (a simple message) through a string of disconnected ideas that do not serve the ultimate purpose: solving the needs of a target user. Ultimately, it boils down to making the case that time is better spent on requirements/use-cases that make sense for the product, while the opportunity cost of not having a clear strategy/end goal will lead to too many chefs and a muddled product message.
The ultimate way to make this message hit home is to involve other stakeholders and have development demonstrate their work. Eventually there will be disagreement, and a more formalized (less cowboy) approach will lead to a more refined and simple product.
One of the problems you mention is the tight schedule and the scope creep induced by the devs themselves. Explain to them that by using use-cases you can buy time by dropping features which would otherwise end up on the "never used" pile. With use-cases you can find out which features customers need and will pay for, and by removing unimportant features from the scope you gain the time to implement the rest.

Use-cases, apart from defining the scope, also help to identify all the stakeholders, which might help you focus even better while defining the scope and prevent forgetting about trivial things which are not so apparent but are a must if the product is to be usable. The third most important thing about use-cases is that they let you start thinking, before development, about corner cases which might matter to the customer, so you can work out with the customer what the ideal solution would be instead of letting the coder decide on his/her own under deadline pressure.
Just show them.
Example is not the best way of educating people; it is the only one.
Lead by example, focusing on extensions and exceptions. In other words, emphasize the failure scenarios, because everyone knows how the system should work. The real value of written use cases is identifying what should happen when something goes wrong.
That noted, consider that you may have to live without written use cases. And, for the environment you describe, a major win is any sort of requirements documentation. Screen comps and/or prototyping are often easier to introduce.
What's your standard way of debugging a problem? This might seem like a pretty broad question with some of you replying 'It depends on the problem' but I think a lot of us debug by instinct and haven't actually tried wording our process. That's why we say 'it depends'.
I was sort of forced to word my process recently because a few developers and I were working on the same problem and we were debugging it in totally different ways. I wanted them to understand what I was trying to do and vice versa.
After some reflection I realized that my way of debugging is actually quite monotonous. I'll first try to reliably replicate the problem (especially on my local machine). Then, through a process of elimination (and this is where I think it's problem-dependent), I try to identify the problem.
The other guys were trying to do it in a totally different way.
So, just wondering what has been working for you guys out there? And what would you say your process is for debugging if you had to formalize it in words?
BTW, we still haven't found our problem =)
My approach varies based on my familiarity with the system at hand. Typically I do something like:
Replicate the failure, if at all possible.
Examine the fail state to determine the immediate cause of the failure.
If I'm familiar with the system, I may have a good guess about the root cause. If not, I start to mechanically trace the data back through the software while challenging basic assumptions made by the software.
If the problem seems to have a consistent trigger, I may manually walk forward through the code with a debugger while challenging implicit assumptions that the code makes.
Tracing the root cause is, of course, where things can get hairy. This is where having a dump (or better, a live, broken process) can be truly invaluable.
I think the key point in my debugging process is challenging preconceptions and assumptions. The number of times I've found a bug in a component that I or a colleague would swear was working fine is massive.
I've been told by my more intuitive friends and colleagues that I'm quite pedantic when they watch me debug or ask me to help them figure something out. :)
Consider getting hold of the book "Debugging" by David J Agans. The subtitle is "The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems". His list of debugging rules, available in poster form at the web site (and there's a link for the book, too), is:
Understand the system
Make it fail
Quit thinking and look
Divide and conquer
Change one thing at a time
Keep an audit trail
Check the plug
Get a fresh view
If you didn't fix it, it ain't fixed
The last point is particularly relevant in the software industry.
I picked these up from the web or from some book I can't recall (it may have been Coding Horror ...)
Debugging 101:
Reproduce
Progressively Narrow Scope
Avoid Debuggers
Change Only One Thing At a Time
Psychological Methods:
Rubber-duck debugging
Don't Speculate
Don't be too Quick to Blame the Tools
Understand Both Problem and Solution
Take a Break
Consider Multiple Causes
Bug Prevention Methods:
Monitor Your Own Fault Injection Habits
Introduce Debugging Aids Early
Loose Coupling and Information Hiding
Write a Regression Test to Prevent Recurrence
Technical Methods:
Inert Trace Statements (see the sketch after this list)
Consult the Log Files of Third Party Products
Search the web for the Stack Trace
Introduce Design By Contract
Wipe the Slate Clean
Intermittent Bugs
Exploit Locality
Introduce Dummy Implementations and Subclasses
Recompile / Relink
Probe Boundary Conditions and Special Cases
Check Version Dependencies (third party)
Check Code that Has Changed Recently
Don't Trust the Error Message
Graphics Bugs
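To illustrate the "Inert Trace Statements" item above, here is a minimal sketch using Python's standard logging module (the "billing" module name is invented). The trace statements stay in the code permanently and cost almost nothing while they are switched off; flipping one level gets you a full trace when a bug shows up.

    import logging

    log = logging.getLogger("billing")      # module name is invented

    def apply_discount(total, rate):
        log.debug("apply_discount(total=%r, rate=%r)", total, rate)
        result = total * (1 - rate)
        log.debug("-> %r", result)
        return result

    # Off by default: nothing below WARNING is formatted or printed,
    # so the trace statements are inert and nearly free.
    logging.basicConfig(level=logging.WARNING)
    apply_discount(100.0, 0.15)             # silent

    # When a bug shows up, open the tap for just this module:
    logging.getLogger("billing").setLevel(logging.DEBUG)
    apply_discount(100.0, 0.15)             # now traces both calls

Note the lazy %-style arguments: the expensive formatting only happens when the record is actually emitted, which is what keeps the statements "inert".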
When I'm up against a bug that I can't seem to figure out, I like to make a model of the problem. Make a copy of the problematic section of code, and start removing features from it, one at a time. Run a unit test against the code after every removal. Through this process you will either remove the feature with the bug (and hence locate the bug), or you will have isolated the bug down to a core piece of code that contains the essence of the problem. And once you figure out the essence of the problem, it's a lot easier to fix.
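In pseudo-Python, that whittling-down loop looks something like this (a one-feature-at-a-time reduction; "features" and the bug check are placeholders for your copied code and its unit test):

    def minimize(features, reproduces_bug):
        """Drop features one at a time, keeping only those the bug needs."""
        needed = list(features)
        for f in list(needed):
            trial = [x for x in needed if x != f]
            if reproduces_bug(trial):   # bug still happens without f?
                needed = trial          # then f was irrelevant: drop it
        return needed                   # the essence of the problem

    # Toy run: the "bug" only appears when cache and retry interact.
    bug = lambda fs: "cache" in fs and "retry" in fs
    print(minimize(["logging", "cache", "ui", "retry"], bug))   # ['cache', 'retry']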
I normally start off by forming a hypothesis based on the information I have at hand. Once this is done, I work to prove it correct. If it proves to be wrong, I start over with a different hypothesis.
Most multithreaded synchronization issues get solved very easily with this approach.
Also, you need to have a good understanding of the debugger you are using and its features. I work on Windows applications and have found WinDbg to be extremely helpful in finding bugs.
Reducing the bug to its simplest form often leads to greater understanding of the issue, as well as adding the benefit of being able to involve others if necessary.
Setting up a quick reproduction scenario to allow for efficient use of your time when testing any hypothesis you choose.
Creating tools to dump the environment quickly for comparisons.
Creating and reproducing the bug with logging turned up to the maximum level.
Examining the system logs for anything alarming.
Looking at file dates and timestamps to get a feeling if the problem could be a recent introduction.
Looking through the source repository for recent activity in the relevant modules.
Applying deductive reasoning and Occam's Razor.
Being willing to step back and take a break from the problem.
I'm also a big fan of using the process of elimination. Ruling out variables tremendously simplifies the debugging task. It's often the very first thing that should be done.
Another really effective technique is to roll back to your last working version, if possible, and try again. This can be extremely powerful because it gives you solid footing from which to proceed more carefully. A variation on this is to get the code to a point where it is working with less functionality, rather than not working with more functionality.
Of course, it's very important not to just try things. That increases your despair because it never works. I'd rather make 50 runs to gather information about the bug than take one wild swing and hope it works.
I find the best time to "debug" is while you're writing the code. In other words, be defensive. Check return values, liberally use assert, use some kind of reliable logging mechanism and log everything.
To more directly answer the question, the most efficient way for me to debug problems is to read code. Having a log helps you find the relevant code to read quickly. No logging? Spend the time putting it in. It may not seem like you're finding the bug, and you may not be; the logging might help you find another bug, though, and eventually, once you've gone through enough code, you'll find it... faster than setting up debuggers and trying to reproduce the problem, single-stepping, etc.
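As a small sketch of what "be defensive" can look like in practice (an invented example, using nothing beyond Python's standard assert and logging):

    import logging

    log = logging.getLogger(__name__)

    def average_latency(samples):
        # Fail fast and loudly on broken assumptions, instead of
        # silently returning garbage that surfaces three modules later.
        assert samples, "average_latency called with no samples"
        assert min(samples) >= 0, "negative latency in %r" % (samples,)
        log.info("averaging %d samples", len(samples))
        result = sum(samples) / len(samples)
        log.info("average latency = %.3f", result)
        return result

The asserts document the function's assumptions, and the log lines are exactly what makes "read the code via the log" possible later.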
While debugging I try to think of what the possible problems could be. I've come up with a fairly arbitrary classification system, but it works for me: all bugs fall into one of four categories. Keep in mind here that I'm talking about runtime problems, not compiler or linker errors. The four categories are:
dynamic memory allocation
stack overflow
uninitialized variable
logic bug
These categories have been most useful to me with C and C++, but I expect they apply pretty well elsewhere. The logic bug category is a big one (e.g. putting a < b when the correct thing was a <= b), and can include things like failing to synchronize access among threads.
Knowing what I'm looking for (one of these four things) helps a lot in finding it. Finding bugs always seems to be much harder than fixing them.
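As a miniature example of the logic-bug category (my example, in Python rather than C/C++): an inclusive range check written with the wrong comparison operator looks fine for the general case, and only a boundary input exposes it.

    def in_range(x, low, high):
        # BUG: '<' silently excludes the upper boundary; the intent
        # was an inclusive range, i.e. low <= x <= high.
        return low <= x < high

    print(in_range(5, 1, 10))    # True  - the general case looks fine
    print(in_range(10, 1, 10))   # False - the boundary case exposes the bug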
The actual mechanics for debugging are most often:
do I have an automated test that demonstrates the problem?
if not, add a test that fails
change the code so the test passes
make sure all the other tests still pass
check in the change
No automated testing in your environment? No time like the present to set it up. Too hard to organize things so you can test individual pieces of your program? Take the time to make it so. May make it take "too long" to fix this particular bug, but the sooner you start, the faster everything else'll go. Again, you might not fix the particular bug you're looking for but I bet you find and fix others along the way.
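In miniature, the mechanics above might look like this (pytest-style; the function and the bug are invented, and the second test is the new one that fails today):

    def slugify(title):
        return title.lower().replace(" ", "-")

    def test_existing_behaviour_still_passes():
        assert slugify("Hello World") == "hello-world"

    def test_reported_bug():    # step 2: the new test, failing today
        # slugify currently returns "--hello--world--" for this input;
        # step 3 is changing slugify() until this passes.
        assert slugify("  Hello  World  ") == "hello-world"

Once the second test passes and the first still does, you check in both the fix and the test, and the bug can't quietly come back.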
My method of debugging is different, probably because I am still a beginner.
When I encounter a logical bug, I seem to end up adding more variables to see which values go where, and then I debug line by line in the piece of code that is causing the problem.
Replicating the problem and generating a repeatable test data set is definitely the first and most important step to debugging.
If I can identify a repeatable bug, I'll typically try and isolate the components involved until I locate the problem. Frequently I'll spend a little time ruling out cases so I can state definitively: The problem is not in component X (or process Y, etc.).
First I try to replicate the error; without being able to replicate it, it is basically impossible in a non-trivial program to guess the problem.
Then, if possible, I break the code out into a separate standalone project. There are several reasons for this: first, if the original project is big, it is quite difficult to debug; second, it eliminates or highlights any assumptions about the code.
I normally always have another copy of VS open, which I use for debugging parts in mini projects and for testing routines that I later add to the main project.
Once you have reproduced the error in the separate module, the battle is almost won.
Sometimes it is not easy to break out a piece of code, so in those cases I use different methods depending on how complex the issue is. In most cases assumptions about data seem to come back and bite me, so I try to add lots of asserts in the code to make sure my assumptions are correct. I also disable code using #ifdef until the error disappears, eliminating dependencies on other modules, etc... sort of slowly circling in on the bug like a vulture.
I don't think I really have a conscious way of doing it; it varies quite a lot, but the general principle is to eliminate the noise around the issue until it is quite obvious what it is. Hope I didn't sound too confusing :)