What is the multiplier in Interface Builder for?

What does the multiplier, which is a property of a constraint in Auto Layout, do?

The relationship between two values in a constraint is determined by a formula:
b = am + c
where a and b are the two values that are to be related, m is the multiplier, and c is the constant.
So for example if one width is to be twice that of another, clearly a multiplier of 2 is going to make sense (and a constant of 0). But if one width is to be 10 more than another, then a constant of 10 is going to make sense (and a multiplier of 1).
The defaults, naturally, are a multiplier of 1 and a constant of 0; that makes a and b equal.
Extra for experts: under the hood, part of the reason for the structure of this formula is that the constraints together form a set of simultaneous linear equations to solve. This is how the various constraints are resolved to get the actual layout.

According to Apple's docs: "The constant multiplied with the attribute on the right-hand side of the constraint as part of getting the modified attribute."
It is useful, for instance, if you want one view's height to be 35% of another view's height. In this case, you'd create a constraint making their heights equal with a multiplier of 0.35.

Related

2D bin packing with predefined gaps in container

I have a problem with the optimal placement of rectangular objects of different sizes and quantities in a rectangular container. The problem could be solved perfectly with one of the 2D bin packing algorithms, but only for an empty container, and for me that is almost never the case: my containers can have restricted places where no object may be placed.
(Image: packing example.)
Surely I am not the first to encounter this kind of problem, and I hope someone has already developed a good solution for it. Anything is welcome: book references, articles, code snippets, etc.
Formal algorithms are preferred over neural networks and that kind of thing.
One possible way to solve it is with integer linear programming. There are different models but here is a simple one (with a bit of an issue, but you can improve on this if necessary).
Split the problem into a master problem and sub problems, with the master problem looking like this:
minimize sum(p)
s.t.
for all i: sum[j] p[j]*count[j,i] >= n[i]
for all j: p[j] >= 0 (and integer, but don't add that constraint explicitly; see below)
Where:
p are the decision variables, deciding how many instances to use of a particular "packing" of the available items into the container. There are obviously way too many of these to list in advance, but they can be generated dynamically.
count[j,i] is the number of times that packing j contains item i.
n[i] is the number of times we want item i.
the constraints are >= because packing a couple of extra items is OK, and it lets us use fewer distinct packings (otherwise the master problem would need special "deliberately suboptimal" packings to be able to fulfill the constraints).
the integer constraint shouldn't be added explicitly if you're using a solver, because an integer solution may need columns that were not needed yet in the fractional solution.
Start with a couple of dumb packings for each item so that there definitely is some solution, bad as it may be. You can even just place one item in the container which is trivial to do even without using the solver for the sub problem, but the sub problem has to be solved anyway so you may as well reuse it here.
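For concreteness, here is a minimal Python sketch of that master LP relaxation (assuming SciPy is available; any LP solver that reports dual values would do). The function name and the toy data are mine; count follows the count[j,i] convention above.

import numpy as np
from scipy.optimize import linprog

def solve_master(count, n):
    # Minimize sum(p) subject to count^T p >= n, p >= 0 (LP relaxation).
    # count[j, i] = number of times packing j contains item i.
    num_packings = count.shape[0]
    c = np.ones(num_packings)               # objective: total containers used
    # linprog expects A_ub @ p <= b_ub, so negate the >= rows.
    res = linprog(c, A_ub=-count.T, b_ub=-np.asarray(n, dtype=float),
                  bounds=[(0, None)] * num_packings, method="highs")
    duals = -res.ineqlin.marginals          # dual prices of the item rows
    return res.x, res.fun, duals

# Toy run: 3 known packings over 2 kinds of item (e.g. from the dumb start).
count = np.array([[1, 0],
                  [0, 1],
                  [2, 1]])
p, containers, duals = solve_master(count, n=[5, 3])
print(containers, duals)                    # the duals feed the sub problem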
The sub problem is finding out what packing can be made that would improve the current solution that the master problem has. So take the dual costs of the rows of the master problem C (there are as many as there are different kinds of item) and solve
maximize y.C
s.t.
1) for all i: y[i] <= n[i]
2) for all i: y[i] = sum[j] P[j] if j is a placement of item i
3) for all cells (a,b): sum[j] P[j] (if j covers a,b) <= 1
4) for all existing packings e: e.y <= sum(e) - 1
y >= 0 and integer
P boolean
Where,
y are implied variables that follow from a choice for P, y[i] is how many times item i occurs in the packing.
P are the real decision variables, deciding whether or not to use a particular placement of a particular item. There are 62 different item-placements in your example problem, including rotations.
constraint 1 ensures that a packing can actually be used in an integer solution to the master problem (using too many of an item would doom the corresponding variable to be fractional or zero).
constraint 2 links y to P
constraint 3 ensures that the solution is a valid packing, in the sense that items do not overlap each other.
constraint 4 is necessary to avoid re-creating a column that was already added to the master problem.
Re-creating a column wouldn't happen if the master problem were a regular linear program, but it's an integer program, and after branching, constraint 4 is needed to explicitly prevent re-creation. For example, on the "low" branch (recall this means we took some variable k that had a fractional value f and limited it to be <= ⌊f⌋), the first thing the sub problem tries to do is re-create exactly the packing that corresponds to k, so that it can be added back to the master problem to "undo the damage". That is exactly the opposite of what we need, though, so re-creating a column must be banned.
Constraint 4 is not a great way to do it, because now what the sub problem will try is generating all the equivalent packings, thanks to symmetries. For example, after rounding down the variable of one packing (images omitted), it generates one equivalent packing after another. There are a lot of these, and they're all pointless, because it doesn't matter (for the objective value of the master problem, when the integer constraint is taken into account) where the 1x3 piece goes; it just matters that the packing contains a 1x3 piece and 14 1x1 pieces.
So ideally constraint 4 would be replaced by something more complicated that bans a packing equivalent to any that have come before, but something else that mostly works is first trying the high branch. At least in this example, that works out just fine.
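For concreteness, here is a rough sketch of the pricing sub problem in Python with PuLP (an assumption: pip install pulp; the function name and input format are mine). placements lists every legal position of every item as an (item, cells) pair, with blocked cells already excluded; banned holds the placement-index sets of columns already generated, and the last constraint is one plausible reading of constraint 4, banning the exact same set of placements.

import pulp

def price_packing(placements, duals, n, banned):
    prob = pulp.LpProblem("pricing", pulp.LpMaximize)
    P = [pulp.LpVariable(f"P{j}", cat="Binary") for j in range(len(placements))]
    items = sorted({item for item, _ in placements})
    # Constraint 2: y[i] = number of copies of item i in the packing.
    y = {i: pulp.lpSum(P[j] for j, (item, _) in enumerate(placements)
                       if item == i) for i in items}
    prob += pulp.lpSum(duals[i] * y[i] for i in items)    # maximize y . C
    for i in items:
        prob += y[i] <= n[i]                              # constraint 1
    cells = {c for _, cs in placements for c in cs}
    for cell in cells:                                    # constraint 3: no overlap
        prob += pulp.lpSum(P[j] for j, (_, cs) in enumerate(placements)
                           if cell in cs) <= 1
    for e in banned:                                      # constraint 4: no repeats
        prob += pulp.lpSum(P[j] for j in e) <= len(e) - 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    chosen = [j for j in range(len(P)) if (P[j].value() or 0) > 0.5]
    return chosen, pulp.value(prob.objective)

If the returned objective exceeds 1 (the cost of a column in the master problem), the new packing has negative reduced cost and is worth adding as a column.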
In this example, after adding the columns that allow the master problem to be optimal (but still fractional, before any branching), the objective value is 5.5882352941176467. That already means that we know we'll need at least 6 containers, because this being the optimal fractional value proves that it cannot be done with 5 and a fractional number of containers is not an option.
A solution with 6 containers is found quickly: three copies of one packing and one each of three other packings (images omitted), which together pack all the pieces plus an extra 1x4 piece and 3 extra 1x1 pieces.
This algorithm does not depend much on the shape of the pieces or the container, except that they have to be expressible as cells on a grid: the container can have holes all over the place, and the pieces can be more like Tetris pieces or even have disconnected parts. A downside is that the list of placements it needs scales badly with the size of the container.

Showing two images with the same colorbar in log

I have two sparse matrices "Matrix1" and "Matrix2" of the same size p x n.
By sparse matrix I mean that it contains a lot of exactly zero elements.
I want to show the two matrices with the same colormap and a single colorbar. Doing this in MATLAB is straightforward:
bottom = min(min(min(Matrix1)),min(min(Matrix2)));  % global minimum of both matrices
top = max(max(max(Matrix1)),max(max(Matrix2)));     % global maximum of both matrices
subplot(1,2,1)
imagesc(Matrix1)
colormap(gray)
caxis manual
caxis([bottom top]);   % same color limits for both images
subplot(1,2,2)
imagesc(Matrix2)
colormap(gray)
caxis manual
caxis([bottom top]);
colorbar;
My problem:
When I show a matrix using imagesc(Matrix), the low-level noise (or background) that always shows up with imagesc(10*log10(Matrix)) is invisible.
That is why I want to show the 10*log10 of the matrices. But in that case the minimum value will be -Inf, since the matrices are sparse, and caxis will give an error because bottom is equal to -Inf.
What do you suggest? How can I modify the above code?
Any help will be much appreciated!
A very important point is that the minimum value in your matrices will always be 0. Leveraging this, a very simple way to address your problem is to add 1 inside the log operation, so that values that are 0 in the original matrix also map to 0 after the log. This avoids the -Inf problem you're encountering. It is in fact a very common way of visualizing the magnitude of the Fourier transform. Adding 1 before taking the logarithm ensures that the output has no negative values, yet the rate of change of the curve remains essentially intact, as the effect is simply a translation of the curve by 1 unit to the left.
Therefore, simply do imagesc(10*log10(1 + Matrix));, then the minimum is always bounded at 0 while the maximum is unbounded but subject to the largest value that is seen in Matrix.
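For comparison, here is a Python/matplotlib analogue of the same fix (the random sparse matrices are stand-ins for Matrix1 and Matrix2; in MATLAB the change is just the one-liner above):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
shape = (50, 40)
Matrix1 = rng.random(shape) * (rng.random(shape) < 0.1)   # mostly zeros
Matrix2 = rng.random(shape) * (rng.random(shape) < 0.1)

A = 10 * np.log10(1 + Matrix1)    # zeros map to 0, never to -inf
B = 10 * np.log10(1 + Matrix2)
bottom = min(A.min(), B.min())    # 0 here, by construction
top = max(A.max(), B.max())

fig, axes = plt.subplots(1, 2)
for ax, M in zip(axes, (A, B)):
    im = ax.imshow(M, cmap="gray", vmin=bottom, vmax=top)
fig.colorbar(im, ax=list(axes))   # one colorbar shared by both images
plt.show()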

Viewpoints Sudoku

I'm looking for alternative viewpoints to solve sudoku problems using constraint programming.
The classical viewpoint uses finite-domain (row, column) variables that can take values from 1 to 9. This is a good viewpoint, and it's easy to define constraints for it. For example, variable (1,2) with value 4 means there is a 4 in row 1, column 2.
But it's hard to come up with other viewpoints. I tried, and came up with the viewpoint of a 3-dimensional matrix of binary values. For example, variable (1,2,7) with value 1 means there is a 7 in row 1, column 2. But a binary encoding should only be your go-to if all other viewpoints fail to deliver good constraints.
Are there any other good viewpoints out there?
EDIT: A good viewpoint should allow the constraints to be concisely expressed. I prefer viewpoints that allow the problem to be described using as few constraints as possible, as long as those constraints have efficient algorithms.
Definition (viewpoint): A viewpoint is a pair ⟨X, D⟩, where X = {x1, ..., xn} is a set of variables and D is a set of domains; for each xi ∈ X, the associated domain Di is the set of possible values for xi. It must be possible to ascribe a meaning to the variables and values of the CSP in terms of the problem P, and so to say what an assignment in the viewpoint ⟨X, D⟩ is intended to represent in terms of P.
The viewpoints you gave are homomorphic mappings of the positional encoding of the relation a sudoku is built from: (row, column, digit).
Another approach would be to encode the restriction sets (rows 1-9, columns 1-9, squares [ul, um, ur, ml, mm, mr, ll, lm, lr], or whatever restrictions apply) and whether a certain digit is placed in each of them. This will likely be horrible with respect to defining constraints (so I'd guess it is to be considered not good): it requires the relations among the restriction sets to be encoded, i.e. "known", separately.
E.g. a (2,5,7) in the classical viewpoint would imply (row2,7), (col5,7) and (um,7) in this alternative.
As you can see, the problem is the encoding of the relation between a logical position and the various constraints.
The classical viewpoint builds on encoding the positional data and applies constraints to the possible placements (the way a sudoku is explained and approached for solving). The alternative uses the constraint sets as the viewpoint and needs to address the positioning through constraints.
Normal people might tie their brains in a knot over such a representation, though. (And I'd not volunteer to figure out the constraints...)
One possible other viewpoint is the following:
Let's first associate a number with each 3x3 region (top-left is 1, top-middle is 2, etc.) and, inside each region, a number with each of its 9 squares (top-left is 1, top-middle is 2, etc.).
X = {Xi,j | i ∈ 1..9, j ∈ 1..9} where Xi,j has domain 1..9 and designates the place of number i in region j. The region constraint is already encoded by this model; the remaining two are the row and column constraints.
Here is the row constraint:
for each number i ∈ 1..9
for each (a,b,c) ∈ {(1,2,3),(4,5,6),(7,8,9)}
(Xi,a,Xi,b,Xi,c) ∈ {(1,4,7),(1,4,8),(1,4,9),(1,5,7),(1,5,8),(1,5,9),(1,6,7),(1,6,8),(1,6,9),(2,4,7),(2,4,8),(2,4,9),(2,5,7),(2,5,8),(2,5,9),(2,6,7),(2,6,8),(2,6,9),(3,4,7),(3,4,8),(3,4,9),(3,5,7),(3,5,8),(3,5,9),(3,6,7),(3,6,8),(3,6,9)}
(Strictly, the full constraint also needs the tuples obtained by permuting which region takes which row of the band; the 27 listed fix one particular row order.)
The column constraint is similar but with (a,b,c) ∈ {(1,4,7),(2,5,8),(3,6,9)}.
This view is region-based, but two other similar views exist, based on rows and on columns.
Other views exist, you could for example have X={Xi,j | i ∈ 1..9, j ∈ 1..9 } ∪ {Yi,j | i ∈ 1..9, j ∈ 1..9 } where Xi,j is the row (from 1 to 3) of number i in region j and Yi,j its column. All you need to do is figure out how to express the constraints.
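For concreteness, here is a minimal sketch of the classical viewpoint in Python with OR-tools CP-SAT (an assumption: pip install ortools): one 1..9 variable per cell and 27 all-different constraints. The alternative viewpoints above change what the variables mean while keeping this overall shape.

from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = {(r, c): model.NewIntVar(1, 9, f"x{r}{c}")
     for r in range(9) for c in range(9)}

for r in range(9):                        # row constraints
    model.AddAllDifferent([x[r, c] for c in range(9)])
for c in range(9):                        # column constraints
    model.AddAllDifferent([x[r, c] for r in range(9)])
for br in range(0, 9, 3):                 # 3x3 region constraints
    for bc in range(0, 9, 3):
        model.AddAllDifferent([x[br + dr, bc + dc]
                               for dr in range(3) for dc in range(3)])
# Givens would be added as equalities, e.g. model.Add(x[0, 1] == 4).

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for r in range(9):
        print([solver.Value(x[r, c]) for c in range(9)])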

Which algorithm will be required to do this?

I have data of this form:
for x=1, y is one of {1,4,6,7,9,18,16,19}
for x=2, y is one of {1,5,7,4}
for x=3, y is one of {2,6,4,8,2}
....
for x=100, y is one of {2,7,89,4,5}
Only one of the values in each set is the correct value, the rest is random noise.
I know that the correct values describe a sinusoid function whose parameters are unknown. How can I find the correct combination of values, one from each set?
I am looking for something like a "travelling salesman" style combinatorial optimization algorithm.
You're trying to do curve fitting, for which there are several algorithms depending on the type of curve you want to fit your data to (linear, polynomial, etc.). I have no idea whether there is a specific algorithm for sinusoidal curves (Fourier approximations), but my first idea would be to use a polynomial fitting algorithm with a polynomial approximation of the sine.
I wonder whether you need to do this in the course of another, larger program, or whether you are trying to do this task on its own. If the latter, you'd be much better off using a statistical package, my preferred one being R. It allows you to import your data, fit curves and draw graphs in just a few lines, and you can also use R in batch mode to call it from a script or even a program (this is what I tend to do).
It depends on what you mean by "exactly", and what you know beforehand. If you know the frequency w, and that the sinusoid is unbiased, you have an equation
a cos(w * x) + b sin(w * x)
With two (x, y) points at different x values you can find a and b, and then check the generated curve against all the other points. Choose the two x values with the smallest number of y candidates and try all combinations of y's. If there is a bias, i.e. your equation is
a cos(w * x) + b sin(w * x) + c
then you need to look at three x values.
If you do not know the frequency, you can try the same technique; unfortunately the solutions may not be unique, and there may be more than one w that fits.
Edit: As I understand your problem, you have one real y value for each x and a bunch of incorrect ones. You want to find the real values. The best way to do this is to fit curves through a small number of points and check whether the curve fits some y value in the other sets.
If not all the x values have valid y values, then the same technique applies, but you need to look at a much larger set of pairs, triples or quadruples (essentially every pair, triple, or quad of points with different y values).
If your problem is something else, and I suspect it is, please specify it.
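Here is a small Python sketch of that pair-and-check idea, assuming the frequency w is known and the sinusoid is unbiased (the function names and the tolerance are mine):

import numpy as np
from itertools import product

def fit_through(w, x1, y1, x2, y2):
    # Solve the 2x2 linear system for (a, b) from two (x, y) points.
    M = np.array([[np.cos(w * x1), np.sin(w * x1)],
                  [np.cos(w * x2), np.sin(w * x2)]])
    return np.linalg.solve(M, np.array([y1, y2]))

def best_fit(candidates, w, tol=1e-6):
    # candidates maps each x to its set of possible y values.
    xs = sorted(candidates)
    # As suggested above: use the two x values with the fewest y candidates.
    x1, x2 = sorted(xs, key=lambda x: len(candidates[x]))[:2]
    best, best_score = None, -1
    for y1, y2 in product(candidates[x1], candidates[x2]):
        try:
            a, b = fit_through(w, x1, y1, x2, y2)
        except np.linalg.LinAlgError:
            continue                     # singular system, skip this pair
        pred = {x: a * np.cos(w * x) + b * np.sin(w * x) for x in xs}
        score = sum(any(abs(y - pred[x]) < tol for y in candidates[x])
                    for x in xs)
        if score > best_score:
            best, best_score = (a, b), score
    return best, best_score              # (a, b) and number of x's explained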
1. Define sinusoid. Most people take that to mean a function of the form a cos(w * x) + b sin(w * x) + c. If you mean something different, specify it.
2. Specify exactly what success looks like. An example with, say, 10 points instead of 100 would be nice.
It is extremely unclear what this has to do with combinatorial optimization.
Sinusoidal equations are so general that any random choice of one y per x can be fitted by some sinusoid. Unless you impose conditions (e.g. frequency < 100, or all parameters are integers), it is theoretically impossible to differentiate noise from data, so work on extracting such conditions from your data source/experiment first.
By sinusoidal, do you mean a function that is increasing for n steps, then decreasing for n steps, and so on? If so, you can model your data as a sequence of nodes connected by up-links and down-links. For each node (possible value of y), record the length and end value of chains of only ascending or only descending links (there will be multiple chains per node). Then scan for consecutive runs of equal length and opposite direction, modulo some initial offset.

Puzzle: Need an example of a "complicated" equivalence relation / partitioning that disallows sorting and/or hashing

From the question "Is partitioning easier than sorting?":
Suppose I have a list of items and an equivalence relation on them, and comparing two items takes constant time. I want to return a partition of the items, e.g. a list of linked lists, each containing all equivalent items.

One way of doing this is to extend the equivalence to an ordering on the items and order them (with a sorting algorithm); then all equivalent items will be adjacent.
(Keep in mind the distinction between equality and equivalence.)
Clearly the equivalence relation must be considered when designing the ordering algorithm. For example, if the equivalence relation is "people born in the same year are equivalent", then sorting based on the person's name is not appropriate.
Can you suggest a datatype and equivalence relation such that it is not possible to create an ordering?
How about a datatype and equivalence relation where it is possible to create such an ordering, but it is not possible to define a hash function on the datatype that will map equivalent items to the same hash value?
(Note: it is OK if nonequivalent items map to the same hash value (collide) -- I'm not asking to solve the collision problem -- but on the other hand, hashFunc(item) { return 1; } is cheating.)
My suspicion is that for any datatype/equivalence pair where it is possible to define an ordering, it will also be possible to define a suitable hash function, and they will have similar algorithmic complexity. A counterexample to that conjecture would be enlightening!
The answer to questions 1 and 2 is no, in the following sense: given a computable equivalence relation ≡ on strings {0, 1}*, there exists a computable function f such that x ≡ y if and only if f(x) = f(y), which leads to an order/hash function. One definition of f(x) is simple, and very slow to compute: enumerate {0, 1}* in lexicographic order (ε, 0, 1, 00, 01, 10, 11, 000, …) and return the first string equivalent to x. We are guaranteed to terminate when we reach x, so this algorithm always halts.
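Here is that f written out as a (deliberately slow) Python sketch; the names canonical and equiv are mine, and equiv can be any computable equivalence predicate. Hashing or comparing the canonical strings then gives a hash function or an ordering.

from itertools import count, product

def canonical(x, equiv):
    for n in count(0):                        # lengths 0, 1, 2, ...
        for bits in product("01", repeat=n):  # lexicographic within a length
            s = "".join(bits)
            if equiv(s, x):
                return s                      # first string equivalent to x
    # The loop always halts: at the latest it reaches x itself.

# Example: strings are equivalent iff they contain the same number of 1s.
same_ones = lambda s, t: s.count("1") == t.count("1")
print(canonical("1010", same_ones))           # -> "11"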
Creating a hash function and an ordering may be expensive but will usually be possible. One trick is to represent an equivalence class by a pre-arranged member of that class, for instance, the member whose serialised representation is smallest, when considered as a bit string. When somebody hands you a member of an equivalence class, map it to this canonicalised member of that class, and then hash or compare the bit string representation of that member. See e.g. http://en.wikipedia.org/wiki/Canonical#Mathematics
Examples where this is not possible or convenient include: when somebody gives you a pointer to an object that implements equals() but nothing else useful, and you don't get to break the type system to look inside the object; and when you get the results of a survey that only asks people to judge equality between objects. Also, Kruskal's algorithm uses union-find internally to process equivalence relations, so presumably for that particular application nothing more cost-effective has been found.
One example that seems to fit your request is an IEEE floating point type. In particular, a NaN doesn't compare as equivalent to anything else (nor even to itself) unless you take special steps to detect that it's a NaN, and always call that equivalent.
Likewise for hashing. If memory serves, positive and negative zero have different bit patterns (they differ in the sign bit) yet compare as precisely equal. The idea is the same in any case -- the right bit patterns in two numbers can make them compare equal even though the bits are not identical. Unless your hash function takes this into account, it will produce different hash values for numbers that really compare precisely equal.
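A tiny Python demonstration of the signed-zero case (the names are mine): the two values compare equal, but a naive hash over the raw bits separates them.

import struct

a, b = 0.0, -0.0
print(a == b)                         # True: they compare equal
print(struct.pack("<d", a).hex())     # 0000000000000000
print(struct.pack("<d", b).hex())     # 0000000000000080

def bit_hash(x):
    # Naive hash over the raw IEEE-754 bits -- exactly the mistake above.
    return hash(struct.pack("<d", x))

print(bit_hash(a) == bit_hash(b))     # False: equal values, different hashes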
As you probably know, comparison-based sorting takes at least O(n log n) time (more formally you would say it is Omega(n log n)). If you know that there are fewer than log2(n) equivalence classes, then partitioning is faster, since you only need to check equivalence with a single member of each equivalence class to determine which part in the partition you should assign a given element to.
I.e. your algorithm could be like this:
For each x in our input set X:
    For each equivalence class Y seen so far:
        Choose any member y of Y.
        If x is equivalent to y:
            Add x to Y.
            Resume the outer loop with the next x in X.
    If we get here, then x is not in any of the equivalence classes seen so far:
        Create a new equivalence class with x as its sole member.
If there are m equivalence classes, the inner loop runs at most m times, taking O(nm) time overall. As ShreetvatsaR observes in a comment, there can be at most n equivalence classes, so this is O(n^2). Note this works even if there is not a total ordering on X.
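A runnable Python version of that pseudocode (the names are mine), using only an equivalence predicate, with no ordering or hashing required:

def partition(items, equivalent):
    classes = []                        # one list per equivalence class
    for x in items:
        for cls in classes:
            if equivalent(x, cls[0]):   # compare against any one member
                cls.append(x)
                break
        else:                           # no class matched: start a new one
            classes.append([x])
    return classes

# Example: integers are equivalent iff they are congruent mod 3.
print(partition(range(10), lambda a, b: a % 3 == b % 3))
# [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]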
Theoretically it is always possible (for questions 1 and 2), because of the Well-Ordering Theorem, even when you have an uncountable number of partitions. Even if you restrict to computable functions, throwawayaccount's answer covers that. You would need to define your question more precisely :-)

In any case, practically speaking, consider the following: your data type is the set of unsigned integer arrays, and the ordering is lexicographic comparison. You could consider hash(x) = x, but I suppose that is cheating too :-)

I would say (but haven't thought more about getting a hash function, so I might well be wrong) that partitioning by ordering is much more practical than partitioning by hashing, as hashing itself could become impractical. (A hashing function exists, no doubt.)
I believe that...

1. Can you suggest a datatype and equivalence relation such that it is not possible to create an ordering?

...it's possible only for infinite (possibly only for uncountable) sets.

2. How about a datatype and equivalence relation where it is possible to create such an ordering, but it is not possible to define a hash function on the datatype that will map equivalent items to the same hash value?

...same as above.
EDIT: This answer is wrong. I am not deleting it only because some of the comments below are enlightening.

Not every equivalence relation implies an order. Since your equivalence relation should not induce an order, let's pick a relation with no natural order. Take the set of functions f: R -> R as the datatype, and define the equivalence relation as: f is equivalent to g if f(g(x)) = g(f(x)) (commuting operators).

Then you can't sort on that relation: no suitable injective map into the real numbers exists. You just can't find a function which maps this datatype to numbers, due to the cardinality of the function space.
Suppose that F(X) is a function which maps an element of some data type T to another of the same type, such that for any Y of type T, there is exactly one X of type T such that F(X)=Y. Suppose further that the function is chosen so that there is generally no practical way of finding the X in the above equation for a given Y.
Define F{0}(X) = X, F{1}(X) = F(X), F{2}(X) = F(F(X)), etc., so that F{n}(X) = F(F{n-1}(X)).
Now define a data type Q containing a positive integer K and an object X of type T. Define an equivalence relation thus:
Q(a,X) vs Q(b,Y):
If a > b, the items are equivalent iff F{a-b}(Y) == X
If a < b, the items are equivalent iff F{b-a}(X) == Y
If a = b, the items are equivalent iff X == Y
For any given object Q(a,X) there exists exactly one Z such that F{a}(Z) == X. Two objects are equivalent iff they would have the same Z, so one could define an ordering or hash function based upon Z. On the other hand, if F is chosen such that its inverse cannot be practically computed, the only practical way to compare elements may be to use the equivalence function above. I know of no way to define an ordering or hash function without either knowing the largest possible "a" value an item could have, or having a means to invert function F.
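A small Python sketch of this construction with a toy F (an invertible mixing step on 64-bit integers; in the real construction F would be chosen so that inverting it is impractical, which this one is not):

MASK = (1 << 64) - 1

def F(x):
    # A bijection on 64-bit ints (an odd-multiplier linear congruential step).
    return (x * 6364136223846793005 + 1442695040888963407) & MASK

def iterate(f, x, times):
    for _ in range(times):
        x = f(x)
    return x

def equivalent(q1, q2):
    # q = (a, X) stands for the unique Z with F{a}(Z) == X.
    a, X = q1
    b, Y = q2
    if a > b:
        return iterate(F, Y, a - b) == X
    if a < b:
        return iterate(F, X, b - a) == Y
    return X == Y

Z = 12345
q1 = (3, iterate(F, Z, 3))   # both objects derive from the same Z
q2 = (5, iterate(F, Z, 5))
print(equivalent(q1, q2))    # True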
