Recursion algorithm for a multilevel report

I'm currently developing a multi-level SSRS report and I'm struggling with the algorithm. I've written a recursion class, shown below, but the level numbers come out wrong. I want each parent record (represented by a, b, and c) to show its child records so that each child's level = parentRecLevel + 1. Right now, the level values just increment by 1 for every record. Anyone have any advice?
protected BOMLevel getBomLevelItem(str itemId, int numLevel, boolean firstRec)
{
    while select tmpBOM
    {
        bomLevel = this.getBomLevelItem(bomLevel.ItemId, bomLevel.Level, false);
    }
}
Current outcome (where b1 is a child of b, and c1 and c2 are children of c):
1 a
2 b
2 b1
3 c
4 c1
5 c2
Wanted Outcome:
1 a
2 b
3 b1
2 c
3 c1
3 c2

TL;DR: Do not reinvent the wheel; use existing algorithms and frameworks.
I'm assuming your question is not a training exercise but a real-world problem. If it is an exercise, try to get a good grasp of recursion in an easy-to-use language of your choice with a big community before coming back to x++.
Your recursion method looks incomplete: in each recursion you iterate through all records of tmpBOM, which (unless you modify the records in that table somewhere else) does not make sense and will never terminate. I also don't see how this method could produce the outcome you describe. I suggest you take a look at some recursion training material to learn about the fundamental parts of a recursive algorithm.
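To illustrate the fundamental shape of such a recursion, here is a minimal sketch in Python rather than x++ (the bom dictionary standing in for tmpBOM is entirely hypothetical): the parent's level is passed down, and each child gets level + 1.

# Hypothetical parent -> children mapping standing in for tmpBOM.
bom = {"a": ["b", "c"], "b": ["b1"], "c": ["c1", "c2"]}

def print_bom(item, level=1):
    # Emit this record, then recurse into its children one level deeper.
    print(level, item)
    for child in bom.get(item, []):
        print_bom(child, level + 1)

print_bom("a")   # prints 1 a, 2 b, 3 b1, 2 c, 3 c1, 3 c2 -- the wanted outcome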
You tagged the question x++ and the syntax also looks very much like that. Unfortunately you did not add the information which version of microsoft-dynamics you are using, but I will assume dynamics-ax-2012 as it is currently the most common version in use.
In this version, there is already an out-of-the-box SSRS report that will show you the structure of a bill of material. You can call the report at Inventory management > Reports > Bills of materials > Lines. It should be fairly easy to modify this report so that it also shows the level if the report does not already fulfill your requirements.
If you still need to implement your own solution, take a look at class BOMSearch and its children. It is used in several places (check the cross references) and can also be used to expand/explode a bill of materials.
Also note that there are a lot of articles out there that try to explain how to expand or explode a bill of material in x++ code, but as with all things on the internet, be careful: Most of them are incomplete or plain wrong.


Type of pseudo code

First of all, sorry for this stupid question, but I really need to know about the languages used to show the execution flow of a program in computer science books.
Example:
    A = 4
    t1 = A * B
L1: t2 = t1 / C
    if t2 < W goto L2
    M = t1 * k
    t3 = M + I
L2: H = I
    M = t3 - H
    if t3 ≥ 0 goto L3
    goto L1
L3: halt
Does this language have some specific standards? Is this pseudocode or an intermediate form of code?
There are no technical rules for Pseudocode, unless you are attempting to conform to standards/syntax for a particular language.
Pseudocode is meant to be human readable and still convey the flow and meaning of the code.
Books that use Pseudocode typically conform to a Java, C, or Pascal-type (among others) structure to make the code easy to read for those familiar with the languages.
The naming conventions that I have seen in the past usually lean toward C or Java-esque naming conventions.
You can find more information here: http://en.wikipedia.org/wiki/Pseudocode
The purpose of pseudocode is to describe an algorithm in a manner which is readable and unambiguous. (Different authors place different amount of emphasis on those two goals, which are frequently in opposition.)
Pseudocode does not need to look like English (or another spoken/written language), nor does it need to look like a real programming language. Ideally its constructs should be familiar to programmers of many different languages.
That pseudocode fills that requirement fairly well... I don't see anything in it which I can't readily understand the effect of.

Maximum difference between columns using relational algebra

Is it possible to obtain the maximum difference between two columns (for example starting and ending weights)?
Right now I'm leaning towards no, as this would require a new column with the difference between the two columns for each row, then taking the max of that. Doing it the way I originally intended doesn't work either, since arithmetic operations are not allowed in the conditions of select operations (e.g. SIGMA (c1 - c2 < c3 - c4)(Table) is not allowed).
Disclosure: this is part of a homework question.
It can be done, exactly in the way you planned, but you need generalized projection for that. The generalized projection is the operator
Π(E1, E2,..., En)R
where R is a relation, and E1...En are expressions in the form a⊕b, where a and b are attributes of R or constants, and ⊕ is an arbitrary binary operator between them. The result is a relation with attributes E1...En.
This would allow you to project the differences into a new relation (R' := Π(x-y)R), then find the maximum on that, just as you planned.
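As a quick sanity check of that plan, the same two steps can be mimicked over plain tuples in Python (the relation and attribute names are made up for illustration):

# Relation R with attributes (start_weight, end_weight), as plain tuples.
R = [(100, 140), (90, 150), (120, 125)]

# Generalized projection: R' := PI(end - start)(R)
R_prime = [end - start for (start, end) in R]

# Then take the maximum of the single projected attribute.
print(max(R_prime))   # 60, coming from the tuple (90, 150)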
If we're not allowed to use generalized projection, then I think we have no means to actually subtract one attribute from another, or to calculate anything from them: the definition of projection allows only attribute names, and the definition of selection allows only expressions of the form aθb, where a and b are attributes or constants and θ is a binary relational operator. (This is logical, in its way, because if we have a relation R(X,Y), we have no idea about the type of X or Y, making operations on them quite meaningless.)
I think generalized projection is a great extension to relational algebra. It's obviously immensely useful in real life, and it can be defended even from a more scientific point of view: if we allow binary conditional operators on the values, like "X > 50", then we have made assumptions about the type already, rendering that point kind of moot. Your instructor may disagree, though.
If you're looking to do this in the real world, you should be able to do this with a subquery (or a view, which amounts to much the same thing), something like:
select max(diff) from (
    select high - low as diff from blah blah blah
)
Whether this applies to the abstract world of relational algebra, I couldn't say. I'm too busy fixing those damn real-world problems :-)

Prolog Beginner - Is This a Bad Idea?

The application I'm working on is a "configurator" of sorts. It's written in C# and I even wrote a rules engine to go with it. The idea is that there are a bunch of propositional logic statements, and the user can make selections. Based on what they've selected, some other items become required or completely unavailable.
The propositional logic statements generally take the following forms:
A => ~X
ABC => ~(X+Y)
A+B => Q
A(~(B+C)) => ~Q
A <=> B
The symbols:
=> -- Implication
<=> -- Material Equivalence
~ -- Not
+ -- Or
Two letters side-by-side -- And
I'm very new to Prolog, but it seems like it might be able to handle all of the "rules processing" for me, allowing me to get out of my current rules engine (it works, but it's not as fast or easy to maintain as I would like).
In addition, all of the available options fall in a hierarchy. For instance:
Outside
Color
Red
Blue
Green
Material
Wood
Metal
If an item at the second level (feature, such as Color) is implied, then an item at the third level (option, such as Red) must be selected. Similarly if we know that a feature is false, then all of the options under it are also false.
The catch is that every product has its own set of rules. Is it a reasonable approach to set up a knowledge base containing these operators as predicates, and then at runtime start building all of the rules for the product?
The way I imagine it might work would be to set up the idea of components, features, and options, then set up the relationships between them (for instance, if a feature is false, then all of its options are false). At runtime, add the product's specific rules, then pass all of the user's selections to a function, retrieving as output which items are true and which are false.
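For what it's worth, the false-feature propagation described here is cheap to prototype outside Prolog too; a minimal sketch in Python, with entirely hypothetical names, might look like:

# Hypothetical hierarchy: feature -> options.
features = {"Color": ["Red", "Blue", "Green"], "Material": ["Wood", "Metal"]}

def propagate(state):
    # If a feature is known to be false, mark all of its options false too.
    for feature, options in features.items():
        if state.get(feature) is False:
            for option in options:
                state[option] = False
    return state

print(propagate({"Material": False}))
# {'Material': False, 'Wood': False, 'Metal': False}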
I don't know all the implications of what I'm asking about, as I'm just getting into Prolog, but I'm trying to avoid going down a bad path and wasting lots of time in the process.
Some questions that might help target what I'm trying to find out:
Does this sound do-able?
Am I barking up the wrong tree?
Are there any drawbacks or concerns to trying to create all of these rules at runtime?
Is there a better system for this kind of thing out there that I might be able to squeeze into a C# app (Silverlight, to be exact)?
Are there other competing systems that I should examine?
Do you have any general advice about this sort of thing?
Thanks in advance for your advice!
Sure, but Prolog has a learning curve.
Rule-based inference is Prolog's game, though you may have to rewrite many rules into Horn clauses. A+B => Q is doable (it becomes q :- a. q :- b. or q :- (a;b).) but your other examples must be rewritten, including A => ~X.
Depends on your Prolog compiler, specifically whether it supports indexing for dynamic predicates.
Search around for terms like "forward checking", "inference engine" and "business rules". Various communities keep inventing different terminologies for this problem.
Constraint Handling Rules (CHR) is a logic programming language, implemented as a Prolog extension, that is much closer to rule-based inference/forward chaining/business rules engines. If you want to use it, you'll still have to learn basic Prolog, though.
Keep in mind that Prolog is a programming language, not a silver bullet for logical inference. It cuts some corners of first-order logic to keep things efficiently computable. This is why it only handles Horn clauses: they can be mapped one-to-one with procedures/subroutines.
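To see why Horn clauses map so directly onto procedures, a naive forward chainer fits in a few lines of Python (the rules below are hypothetical, mirroring the q :- a. q :- b. rewrite above):

# Horn clauses as (body, head): "q :- a, b." becomes (("a", "b"), "q").
rules = [(("a",), "q"), (("b",), "q"), (("q", "c"), "r")]

def forward_chain(facts):
    # Repeatedly fire any rule whose whole body is already known.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in facts and all(f in facts for f in body):
                facts.add(head)
                changed = True
    return facts

print(sorted(forward_chain({"b", "c"})))   # ['b', 'c', 'q', 'r']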
You can also throw in DCGs to generate bills of materials. The idea is roughly that terminals can be used to indicate subproducts, and non-terminals to define more and more complex combinations of subproducts until you arrive at your final configurable products.
Take for example the two attribute-value pairs Color in {red, blue, green} and Material in {wood, metal}. These could specify a door knob, whereby not all combinations are possible:
knob(red,wood) --> ['100101'].
knob(red,metal) --> ['100102'].
knob(blue,metal) --> ['100202'].
You could then define a door as:
door ... --> knob ..., panel ...
Interestingly you will not see any logic formula in such a product specification, only facts and rules, and a lot of parameters passed around. You can use the parameters in a knowledge acquisition component. By just running uninstantiated goals you can derive possible values for the attribute-value pairs. The predicate setof/3 will sort and remove duplicates for you:
?- setof(Color,Material^Bill^knob(Color,Material,Bill,[]),Values).
Values = [blue, red]
?- setof(Material,Color^Bill^knob(Color,Material,Bill,[]),Values).
Values = [metal, wood]
Now you know the range of the attributes, and you can let the end user successively pick an attribute and a value. Assume he takes the attribute Color and its value blue. The range of the attribute Material then shrinks accordingly:
?- setof(Material,Bill^knob(blue,Material,Bill,[]),Values).
Values = [metal]
In the end, when all attributes have been specified, you can read off the article numbers of the subproducts. You can use this for price calculation, by adding some facts that give you additional information on the article numbers, or to generate ordering lists, etc.:
?- knob(blue,metal,Bill,[]).
Bill = ['100202']
P.S.: It seems that the bill of materials idea used in the product configurator goes back to Clocksin & Mellish. At least I found a corresponding comment here:
http://www.amzi.com/manuals/amzi/pro/ref_dcg.htm#DCGBillMaterials

2D PHP Pattern Creator Algorithm

I'm looking for an algorithm to help me build 2D patterns based on rules. The idea is that I could write a script using a given set of parameters, and it would return a random, 2-dimensional sequence up to a given length.
My plan is to use this to generate image patterns based on rules. Things like image fractals or sprites for game levels could possibly use this.
For example, let's say that you can use A, B, C, and D to create the pattern. The rule is that C and A can never be next to each other, and that D always follows C. Next, let's say I want a pattern of size 4x4. The result might be the following, which respects all the rules:
A B C D
B B B B
C D B B
C D C D
Are there any existing libraries that can do calculations like this? Are there any mathematical formulas I can read up on?
While pretty inefficient concerning runtime, backtracking is an often-used algorithm for this kind of problem. It follows a simple pattern, and if written correctly, you can easily plug your rule set into it; a minimal sketch follows.
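As a rough illustration, here is a small backtracking grid filler in Python (not PHP) that encodes just the two example rules from the question; the helper names, and the reading of "D always follows C" as a within-row rule, are my own assumptions:

import random

SYMBOLS = ["A", "B", "C", "D"]

def valid(grid, r, c, sym, n):
    left = grid[r][c - 1] if c > 0 else None
    top = grid[r - 1][c] if r > 0 else None
    # Rule: D always follows C (read here as: within a row).
    if left == "C" and sym != "D":
        return False
    if sym == "C" and c == n - 1:   # no room left for the required D
        return False
    # Rule: C and A can never be next to each other.
    for neighbour in (left, top):
        if neighbour and {neighbour, sym} == {"A", "C"}:
            return False
    return True

def fill(grid, pos, n):
    if pos == n * n:
        return True
    r, c = divmod(pos, n)
    for sym in random.sample(SYMBOLS, len(SYMBOLS)):   # random order -> random pattern
        if valid(grid, r, c, sym, n):
            grid[r][c] = sym
            if fill(grid, pos + 1, n):
                return True
            grid[r][c] = None   # undo and backtrack
    return False

n = 4
grid = [[None] * n for _ in range(n)]
if fill(grid, 0, n):
    for row in grid:
        print(" ".join(row))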
Define your rule data structures; i.e., define the set of operations that the rules can encapsulate, and define the available cross-referencing that can be done. Once you've done this, you should have a clearer view of what type of algorithms to use to apply these rules to a potential result set.
Supposing that your rules are restricted to "type X is allowed to have type Y immediately to its left/right/top/bottom", you potentially have situations where generating possible patterns is computationally difficult. Take a look at Wang tiles (a good source is the book Tilings and Patterns by Grunbaum and Shephard) and you'll see that with the stated sets of rules you might define sets of Wang tiles. Appropriate sets of these are Turing complete.
For small rectangles, or for your sets of rules, this may only be of academic interest. As mentioned elsewhere, a backtracking approach might be appropriate for your ruleset, in which case you may want to consider appropriate heuristics for the order in which new components are added to your grid. Again, depending on your rulesets, other approaches might work; e.g., if your ruleset admits many solutions you might get a long way by randomly allocating many items to the grid before attempting to fill in the remaining gaps.

Aggregating automatically-generated feature vectors

I've got a classification system, which I will unfortunately need to be vague about for work reasons. Say we have 5 features to consider; it is basically a set of rules:
A B C D E Result
1 2 b 5 3 X
1 2 c 5 4 X
1 2 e 5 2 X
We take a subject and get its values for A-E, then try matching the rules in sequence. If one matches, we return the first result.
C is a discrete value, which could be any of a-e. The rest are just integers.
The ruleset has been automatically generated from our old system and has an extremely large number of rules (~25 million). The old rules were if statements, e.g.
result("X") if $A >= 1 && $A <= 10 && $C eq 'A';
As you can see, the old rules often do not even use some features, or accept ranges. Some are more annoying:
result("Y") if ($A == 1 && $B == 2) || ($A == 2 && $B == 4);
The ruleset needs to be much smaller as it has to be human maintained, so I'd like to shrink rule sets so that the first example would become:
A B C D E Result
1 2 bce 5 2-4 X
The upshot is that we can split the ruleset by the Result column and shrink each independently. However, I cannot think of an easy way to identify and shrink down the ruleset. I've tried clustering algorithms but they choke because some of the data is discrete, and treating it as continuous is imperfect. Another example:
A B C Result
1 2 a X
1 2 b X
(repeat a few hundred times)
2 4 a X
2 4 b X
(ditto)
In an ideal world, this would be two rules:
A B C Result
1 2 * X
2 4 * X
That is, not only would the algorithm identify the relationship between A and B, but it would also deduce that C is noise (not important for the rule).
Does anyone have an idea of how to go about this problem? Any language or library is fair game, as I expect this to be a mostly one-off process. Thanks in advance.
Check out the Weka machine learning lib for Java. The API is a little bit crufty but it's very useful. Overall, what you seem to want is an off-the-shelf machine learning algorithm, which is exactly what Weka contains. You're apparently looking for something relatively easy to interpret (you mention that you want it to deduce the relationship between A and B and to tell you that C is just noise.) You could try a decision tree, such as J48, as these are usually easy to visualize/interpret.
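For reference, the same idea looks roughly like this with scikit-learn's decision tree instead of Weka's J48 (the library swap and the toy data are my own, not from the answer):

import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy rows shaped like the question's rules: Result is Y exactly when A == 3.
rules = pd.DataFrame({
    "A": [1, 1, 2, 2, 3, 3],
    "B": [2, 2, 4, 4, 1, 1],
    "C": ["a", "b", "a", "b", "a", "b"],   # the discrete feature
    "Result": ["X", "X", "X", "X", "Y", "Y"],
})
X = rules[["A", "B", "C"]].copy()
X["C"] = OrdinalEncoder().fit_transform(rules[["C"]]).ravel()   # encode discrete C

tree = DecisionTreeClassifier(random_state=0).fit(X, rules["Result"])
print(export_text(tree, feature_names=["A", "B", "C"]))
# The printed tree splits on A (or B) only; C never appears, i.e. it is noise.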
Twenty-five million rules? How many features? How many values per feature? Is it possible to iterate through all combinations in practical time? If you can, you could begin by separating the rules into groups by result.
Then, for each result, do the following. Considering each feature as a dimension, and the allowed values for a feature as the metric along that dimension, construct a huge Karnaugh map representing the entire rule set.
The map has two uses. One: research automated methods for the Quine-McCluskey algorithm. A lot of work has been done in this area. There are even a few programs available, although probably none of them will deal with a Karnaugh map of the size you're going to make.
Two: when you have created your final reduced rule set, iterate over all combinations of all values for all features again, and construct another Karnaugh map using the reduced rule set. If the maps match, your rule sets are equivalent.
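To make the reduction step concrete, here is the core merge rule of Quine-McCluskey in a few lines of Python (a sketch of one step only, not a full implementation):

# Two rules that differ in exactly one feature merge into one rule with a
# wildcard there -- exactly how "1 2 a X" and "1 2 b X" become "1 2 * X".
def merge(rule1, rule2):
    diff = [i for i, (a, b) in enumerate(zip(rule1, rule2)) if a != b]
    if len(diff) == 1:
        i = diff[0]
        return rule1[:i] + ("*",) + rule1[i + 1:]
    return None   # differs in more than one place: not mergeable

print(merge(("1", "2", "a"), ("1", "2", "b")))   # ('1', '2', '*')
print(merge(("1", "2", "a"), ("2", "4", "a")))   # None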
You could try a neural network approach, trained via backpropagation, assuming you have or can randomly generate (based on the old ruleset) a large set of data that hit all your classes. Using a hidden layer of appropriate size will allow you to approximate arbitrary discriminant functions in your feature space. This is more or less the same idea as clustering, but due to the training paradigm should have no issue with your discrete inputs.
This may, however, be a little too "black box" for your case, particularly if you have zero tolerance for false positives and negatives (although, it being a one-off process, you get an arbitrary degree of confidence by checking a gargantuan validation set).
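If you want to try that quickly, a minimal version with scikit-learn's MLPClassifier might look like the following (library choice and toy data are assumptions on my part):

import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy stand-in for data generated from the old ruleset: Result is X iff A == 1.
X = np.array([[1, 2], [1, 4], [2, 4], [3, 1]])
y = ["X", "X", "Y", "Y"]

# One hidden layer; lbfgs converges well on small data sets.
clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs", random_state=0)
clf.fit(X, y)
print(clf.predict(X))   # should reproduce the training labels ['X' 'X' 'Y' 'Y']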
