Tower of boxes (stacking cubes) - algorithm

I got this task last week but can't find a good algorithm to solve it. Here is the description:
You can build a stable tower out of building cubes if you never put a bigger cube on a smaller one and never put a heavier cube on a lighter one. Write a program that finds the highest possible tower that can be built from n cubes.
Input:
The first row of in.txt contains the number of cubes n (1 ≤ n ≤ 1000). Each of the following n lines contains two positive integers: a cube's side length and its weight (neither exceeds 2000). No two cubes have both the same side length and the same weight.
Output:
Write the number of cubes m in the highest possible stable tower into out.txt. Into the second row, write the m ordinal numbers of the tower's cubes from bottom to top. By the height of the tower we mean the sum of the side lengths of its cubes (not the number of cubes). If there is more than one solution, output any of them.
Example input and output:
input:
5
10 3
20 5
15 6
15 1
10 2
and the output:
3
2 1 5
Here are the limits on the program. Time limit: 0.2 s. Memory limit: 16 MB.
I hope you can help me solve this. Thanks for any help.

The relationship "cube A can be placed on top of cube B" defines a partial order on the cubes. You can use Kahn's algorithm (aka "topological sort") to turn this into a compatible linear order, and then run a longest-path dynamic program over the cubes in that order, where the length of a path is the sum of the side lengths along it.
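For illustration, here is a minimal Python sketch of that idea (my own code, not a reference solution): sorting the cubes by (side, weight) in decreasing order already yields a valid topological order, so no explicit Kahn pass is needed, and an O(n^2) longest-path DP is fine for n ≤ 1000.

    def tallest_tower(cubes):
        """cubes: list of (side, weight) pairs in input order.
        Returns the 1-based ordinals of the cubes, bottom to top."""
        n = len(cubes)
        # Decreasing (side, weight) is a topological order of the
        # "can stand below" relation.
        order = sorted(range(n), key=lambda i: cubes[i], reverse=True)
        best = [cubes[i][0] for i in range(n)]  # max height with cube i on top
        prev = [-1] * n
        for a in range(n):
            for b in range(a):
                i, j = order[a], order[b]       # candidate: j directly below i
                if (cubes[i][0] <= cubes[j][0] and cubes[i][1] <= cubes[j][1]
                        and best[j] + cubes[i][0] > best[i]):
                    best[i] = best[j] + cubes[i][0]
                    prev[i] = j
        top = max(range(n), key=best.__getitem__)
        tower = []
        while top != -1:
            tower.append(top + 1)               # back to 1-based ordinals
            top = prev[top]
        return tower[::-1]                      # bottom to top

    # Sample from the question: prints [2, 1, 5]; the required output would
    # then be the count (3) on one line and the ordinals on the next.
    print(tallest_tower([(10, 3), (20, 5), (15, 6), (15, 1), (10, 2)]))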

Concentric spheres

Two lists are given: one with the radii of upper hemispheres and one with the radii of lower hemispheres. The first list contains N upper hemispheres, indexed 1 to N, and the second contains M lower hemispheres, indexed 1 to M. A sphere of radius R can be made by taking one upper half of radius R and one lower half of radius R. Also, you can put a sphere into a strictly bigger one and create a sequence of nested concentric spheres, but you can't put two or more spheres directly into another one.
If there is a sequence of (D+1) nested spheres, we call this sequence a D-sequence.
Find out how many different X-sequences are possible for each 1 <= X <= C. An X-sequence is different from another if the index of any of the hemispheres used in one X-sequence differs from the other.
INPUT
The first line contains three integers: N, the number of upper sphere halves; M, the number of lower sphere halves; and C.
The second line contains N space-separated integers denoting the radii of upper hemispheres.
The third line contains M space-separated integers denoting the radii of lower hemispheres.
OUTPUT
Output a single line containing C space-separated integers, the i-th of which is the number of i-sequences, modulo 1000000007.
Example
Input
3 4 3
1 2 3
1 1 3 2
Output
5 2 0
I am looking for those radii which appear in both the list of upper and the list of lower hemispheres, so that they can form a sphere, and for each such radius taking the product of its counts in the two lists (each sphere pairs one upper with one lower hemisphere).
So, for a given X, the sum over all choices of X+1 distinct radii of the products of their counts yields the answer.
How can I calculate this efficiently, or is there another approach?
Guys, this is my first answer. Spare me the whip for now, as I am here to learn.
You first find the number of spheres possible for each radius:
Radius:         1 2 3
No. of spheres: 2 1 1
Now, since we can fit a sphere of radius r inside a sphere of radius R whenever R > r, all we need to do is find the number of increasing subsequences of length 2, 3, ..., C+1 in the list of all possible spheres.
List of possible spheres: [1, 1*, 2, 3] (* marks the second sphere of radius 1)
Consider X = 1: a 1-sequence has 2 spheres, so try finding the number of increasing subsequences of length 2 in the above list.
They are:
[1,2], [1*,2], [1,3], [1*,3], [2,3]
Hence the answer is 5.
Get it?
Now, how to solve it:
It can be done using DP. The naive solution has complexity O(n^2 * C).
You may follow along the lines provided in the following link: DP solution.
It is worth mentioning that faster methods do exist, using a BIT, segment trees, etc.
It is similar to this SPOJ problem.
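A minimal Python sketch of this idea (the names are mine): the number of ways to form one sphere of radius r is (#uppers of r) * (#lowers of r), and since the sorted distinct radii are already strictly increasing, counting the weighted increasing subsequences of each length reduces to elementary symmetric polynomials of those per-radius counts, computable in O(m*C):

    from collections import Counter

    MOD = 10**9 + 7

    def count_sequences(upper, lower, c):
        cu, cl = Counter(upper), Counter(lower)
        # Ways to build one sphere of each radius present in both lists.
        ways = [cu[r] * cl[r] for r in sorted(cu.keys() & cl.keys())]
        # e[k] = number of ways to pick k spheres of distinct (hence
        # nestable) radii = k-th elementary symmetric polynomial of `ways`.
        e = [1] + [0] * (c + 1)
        for w in ways:
            for k in range(c + 1, 0, -1):
                e[k] = (e[k] + e[k - 1] * w) % MOD
        # An X-sequence consists of X + 1 nested spheres.
        return [e[x + 1] for x in range(1, c + 1)]

    print(count_sequences([1, 2, 3], [1, 1, 3, 2], 3))  # [5, 2, 0]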

Touching segments

Can anyone please suggest an algorithm for this?
You are given starting and the ending points of N segments over the x-axis.
How many of these segments can be touched, even on their edges, by exactly two lines perpendicular to them?
Sample Input :
3
5
2 3
1 3
1 5
3 4
4 5
5
1 2
1 3
2 3
1 4
1 5
3
1 2
3 4
5 6
Sample Output :
Case 1: 5
Case 2: 5
Case 3: 2
Explanation :
Case 1: We draw two lines (parallel to the Y-axis) crossing the X-axis at points 2 and 4. These two lines touch all five segments.
Case 2: We can touch all five segments even with one line crossing the X-axis at 2.
Case 3: It is not possible to touch more than two segments in this case.
Constraints:
1 ≤ N ≤ 10^5
0 ≤ a < b ≤ 10^9
Let's assume that we have a data structure that supports the following operations efficiently:
Add a segment.
Delete a segment.
Return the maximum number of segments that cover one point (that is, the "best" point).
If we have such a structure, we can solve the initial problem efficiently in the following manner:
Let's create an array of events (one event for the start of each segment and one for the end) and sort it by x-coordinate.
Add all segments to the magical data structure.
Iterate over all events and do the following: when a segment starts, add one to the number of currently covered segments and remove that segment from the data structure. When a segment ends, subtract one from the number of currently covered segments and add the segment back to the data structure. After each event, update the answer with the number of currently covered segments (this shows how many segments are covered by the point corresponding to the current event) plus the maximum returned by the data structure described above (this shows the best possible way to choose the other point).
If this data structure can perform each operation in O(log n), then we have an O(n log n) solution (we sort the events and make one pass over the sorted array, issuing a constant number of queries to this data structure per event).
So how can we implement this data structure? Well, a segment tree works fine here. Adding a segment means adding one to a range; removing a segment means subtracting one from all elements of that range. Getting the maximum is just a standard maximum operation on a segment tree. So we need a segment tree that supports two operations: add a constant to a range and get the maximum over the entire tree. Both can be done in O(log n) time per query.
One more note: a standard segment tree requires coordinates to be small. We may assume that they never exceed 2n (if that is not the case, we can compress them).
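For illustration, a sketch of such a tree in Python (my own code; the "no push-down" variant is sufficient here because we only ever query the maximum over the whole tree):

    class MaxSegTree:
        """Range add, global max, over positions [0, n)."""
        def __init__(self, n):
            self.n = n
            self.mx = [0] * (4 * n)    # subtree max, including own lazy tag
            self.lazy = [0] * (4 * n)  # pending addition for whole subtree

        def add(self, l, r, v, node=1, nl=0, nr=None):
            """Add v to all positions in [l, r] (inclusive)."""
            if nr is None:
                nr = self.n - 1
            if r < nl or nr < l:
                return
            if l <= nl and nr <= r:
                self.mx[node] += v
                self.lazy[node] += v
                return
            mid = (nl + nr) // 2
            self.add(l, r, v, 2 * node, nl, mid)
            self.add(l, r, v, 2 * node + 1, mid + 1, nr)
            self.mx[node] = self.lazy[node] + max(self.mx[2 * node],
                                                  self.mx[2 * node + 1])

        def max_all(self):
            return self.mx[1]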
An O(N * max(log N, M)) solution, where M is the average segment length, implemented in Common Lisp: touching-segments.lisp.
The idea is to first calculate, from left to right at every interesting point, the number of segments that would be touched by a line there (open-left-to-right in the Lisp code). Cost: O(N log N).
Then, from right to left, it calculates, again at every interesting point P, the best location for a line considering only segments fully to the right of P (open-right-to-left in the Lisp code). Cost: O(N * max(log N, M)).
Then it is just a matter of finding the point where the sum of the two values peaks. Cost: O(N).
The code is barely tested and may contain bugs. Also, I have not bothered to handle edge cases such as zero segments.
The problem can be solved in O(N log N) time per test case.
Observe that there is an optimal placement in which each of the two vertical lines goes through some segment endpoint.
Compress the segments' coordinates. More info at: What is coordinate compression?
Build a sorted set X of segment endpoints.
Sort segments [a_i, b_i] by a_i.
Let Q be a priority queue storing the right endpoints of the segments processed so far.
Let T be a max interval tree built over the x-coordinates. Some useful reading at: What are some sources (books, etc.) from where I can learn about Interval, Segment, Range trees?
For each segment, make an [a_i, b_i] range increment-by-1 query to T. This allows finding the maximum number of segments covering some x in [a, b].
Iterate over the elements x of X. For each x, process the segments (not already processed) with a_i <= x; processing means pushing b_i onto Q and making an [a_i, b_i] range increment-by-(-1) query to T. After removing from Q all elements < x, A = Q.size equals the number of segments covering x. B = T.rmq(x + 1, M) returns the maximum number of segments that do not cover x but cover some common y > x. A + B is a candidate for the answer.
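A Python sketch of this sweep (my own code, reusing the MaxSegTree class from the earlier sketch; a heap stands in for the priority queue Q, and querying the global maximum is enough because every segment still in the tree starts strictly to the right of x):

    import heapq

    def max_touched_by_two_lines(segments):
        xs = sorted({x for seg in segments for x in seg})  # compressed coords
        idx = {x: i for i, x in enumerate(xs)}
        tree = MaxSegTree(len(xs))
        for a, b in segments:
            tree.add(idx[a], idx[b], 1)      # all segments start unprocessed
        by_start = sorted(segments)
        ends, si, best = [], 0, 0
        for x in xs:
            # Process segments with a_i <= x: covered by the line at x.
            while si < len(by_start) and by_start[si][0] <= x:
                a, b = by_start[si]
                tree.add(idx[a], idx[b], -1)  # remove from T
                heapq.heappush(ends, b)       # push b_i onto Q
                si += 1
            while ends and ends[0] < x:       # drop segments ending before x
                heapq.heappop(ends)
            # A = len(ends) segments cover x; B = best second point among
            # the remaining segments, all of which lie to the right of x.
            best = max(best, len(ends) + tree.max_all())
        return best

    # Case 1 from the question: prints 5.
    print(max_touched_by_two_lines([(2, 3), (1, 3), (1, 5), (3, 4), (4, 5)]))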
Source:
http://www.quora.com/What-are-the-intended-solutions-for-the-Touching-segments-and-the-Smallest-String-and-Regex-problems-from-the-Cisco-Software-Challenge-held-on-Hackerrank

Fully connection algorithm

I have encountered an algorithm question:
Fully Connection
Given n cities spread along a line, let Xi be the position of city i and Pi its population.
Now we lay cables between every two of the cities based on their distance and population: given two cities i and j, the cost to lay a cable between them is |Xi-Xj| * max(Pi, Pj). How much does it cost to lay all the cables?
For example, given:
i Xi Pi
- -- --
1 1 4
2 2 5
3 3 6
Then the total cost can be calculated as:
i j |Xi-Xj| max(Pi, Pj) Segment Cost
- - ------ ----------- ------------
1 2 1 5 5
2 3 1 6 6
1 3 2 6 12
So that the total cost is 5+6+12 = 23.
While this can clearly be done in O(n^2) time, can it be done in asymptotically less time?
I can think of a faster solution; if I am not wrong, it runs in O(n log n). First sort all the cities by Pi, which costs O(n log n), then process them in increasing order of Pi. The reason: when processing city i, you always know that max(Pi, Pj) = Pi for every already-processed city j. We only add the costs of the cables connecting i to those cities; cables to cities with larger Pi are counted when those cities are processed.
To reduce the complexity I use two binary indexed trees. The first counts points and can answer queries of the kind "how many processed cities are to the right of Xi?" in logarithmic time; call this number NR. The second can answer queries of the kind "what is the sum of distances from all processed points to the right of a given x?", where distances are measured towards a fixed point XR guaranteed to lie to the right of the rightmost point; call this number SUMD. Then the sum of distances from Xi to all processed points on its right is NR * dist(Xi, XR) - SUMD, and these pairs contribute (NR * dist(Xi, XR) - SUMD) * Pi to the result. Do the same for the points on the left and you get the answer. After processing the i-th point, add it to the index trees and go on.
Edit: here is an article about binary indexed trees: http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=binaryIndexedTrees
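A Python sketch of this approach (my own variable names; for simplicity the second tree stores sums of x-coordinates directly, which is equivalent to measuring distances towards a fixed XR):

    class Fenwick:
        """1-based binary indexed tree for prefix sums."""
        def __init__(self, n):
            self.t = [0] * (n + 1)

        def add(self, i, v):
            while i < len(self.t):
                self.t[i] += v
                i += i & -i

        def query(self, i):                  # sum over positions 1..i
            s = 0
            while i > 0:
                s += self.t[i]
                i -= i & -i
            return s

    def total_cost(cities):                  # cities: list of (x, p)
        xs = sorted({x for x, _ in cities})
        rank = {x: i + 1 for i, x in enumerate(xs)}
        cnt, sx = Fenwick(len(xs)), Fenwick(len(xs))
        total = 0
        for x, p in sorted(cities, key=lambda c: c[1]):  # increasing Pi
            r = rank[x]
            n_left, sum_left = cnt.query(r), sx.query(r)
            n_right = cnt.query(len(xs)) - n_left
            sum_right = sx.query(len(xs)) - sum_left
            # Sum of |x - xj| over all already-processed (lighter) cities.
            dist = (n_left * x - sum_left) + (sum_right - n_right * x)
            total += dist * p
            cnt.add(r, 1)
            sx.add(r, x)
        return total

    print(total_cost([(1, 4), (2, 5), (3, 6)]))  # 23, as in the example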
This is the Direct Connections problem from CodeSprint 2.
They will be posting worked solutions to all problems on their website within a week.
(They have said, "Now that the contest is over, we're totally cool with everyone discussing their solutions to the problems.")

Fake Coin Problem

A classic problem with 12 coins (or marbles), one of which is fake. The fake coin is assumed to be lighter than a real one.
You have a balance scale for comparing coins (or marbles).
One can compare the coins one by one, checking all 12.
More efficiently, one can use a decrease-by-factor algorithm: divide the coin stack in two and compare the two stacks on the scale.
The complexity of decrease-by-factor-2 is O(log2 n).
There is a more efficient decrease-by-factor-3 algorithm, O(log3 n), but I have not yet found it.
If anyone can explain it and why it is more efficient, let me know.
The main idea here is to use more knowledge of the problem when setting up your test: if you separate the coins into 3 stacks instead of two and weigh two of those stacks against each other (each containing the same number of coins), there are only two possible outcomes, given that the single fake coin can be in only one of the three stacks:
1.) Both sides have identical weight: the fake coin cannot be in the two stacks weighed, so it must be in the third. You have reduced the problem space to 1/3.
2.) One side weighs more than the other: since there is only one fake coin, it must be on the side that weighs less. Again you have reduced the problem space to 1/3.
Rinse and repeat.
The "decrease by 3" algorithm works on the principle that you can reduce the set of marbles you have to compare by 1/3 by doing only 1 comparison.
Split the marbles into 3 groups, and weight 2 of them, say group 1 and 2.
If weight of group 1 == weight of group 2 then group 3 has the fake marble
If weight of group 1 < weight of group 2 then group 1 has the fake marble
If weight of group 1 > weight of group 2 then group 2 has the fake marble
Of course this assumes that the original set of marbles can be split evenly into 3 groups. If that's not the case, split evenly into 3 groups (each with the same number of marbles), keep the remaining (0, 1, or 2) marbles aside, and add them back to the candidate set after the comparison step.
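A toy Python sketch of the procedure (coins are represented by their weights, and each "weighing" is simulated by comparing sums; in the real puzzle each comparison is one use of the balance scale):

    def find_fake(coins):
        """Index of the single lighter fake coin in ~log3(n) weighings."""
        lo, hi = 0, len(coins)            # candidate range [lo, hi)
        while hi - lo > 1:
            third = (hi - lo + 2) // 3    # group size, rounded up
            g1 = coins[lo:lo + third]
            g2 = coins[lo + third:lo + 2 * third]
            # One weighing: compare group 1 against group 2.
            if sum(g1) < sum(g2):         # fake is in group 1
                hi = lo + third
            elif sum(g2) < sum(g1):       # fake is in group 2
                lo, hi = lo + third, lo + 2 * third
            else:                         # balanced: fake is in the rest
                lo = lo + 2 * third
        return lo

    coins = [2] * 12
    coins[7] = 1                          # the lighter fake
    print(find_fake(coins))               # 7, found in 3 weighings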

Finding good heuristic for A* search

I'm trying to find the optimal solution for a little puzzle game called Twiddle (an applet with the game can be found here). The game has a 3x3 matrix with the numbers from 1 to 9. The goal is to bring the numbers into the correct order using the minimum number of moves. In each move you can rotate a 2x2 square either clockwise or counterclockwise.
I.e. if you have this state
6 3 9
8 7 5
1 2 4
and you rotate the upper left 2x2 square clockwise you get
8 6 9
7 3 5
1 2 4
I'm using an A* search to find the optimal solution. My f() is simply the number of rotations needed. My heuristic function already leads to the optimal solution (if I modify it; see the notice at the end), but I don't think it's the best one you can find. My current heuristic takes each corner, looks at the number in that corner, and calculates the Manhattan distance to the position this number has in the solved state (which gives the number of rotations needed to bring the number to that position), and sums all these values. E.g., take the above example:
6 3 9
8 7 5
1 2 4
and this end state
1 2 3
4 5 6
7 8 9
then the heuristic does the following:
6 is currently at index 0 and should be at index 5: 3 rotations needed
9 is currently at index 2 and should be at index 8: 2 rotations needed
1 is currently at index 6 and should be at index 0: 2 rotations needed
4 is currently at index 8 and should be at index 3: 3 rotations needed
h = 3 + 2 + 2 + 3 = 10
Additionally, if h is 0 but the state is not completely ordered, then h = 1.
But there is a problem: you rotate 4 elements at once, so there are rare cases where you can achieve two (or more) of these estimated rotations in one move. This means the heuristic overestimates the distance to the solution.
My current workaround is to simply exclude one of the corners from the calculation, which fixes this at least for my test cases. I have done no research into whether this really solves the problem or whether the heuristic still overestimates in some edge cases.
So my question is: What is the best heuristic you can come up with?
(Disclaimer: This is for a university project, so it is a bit of homework. But I'm free to use any resource I can come up with, so it's okay to ask you guys. Also, I will credit Stack Overflow for helping me ;) )
Simplicity is often most effective. Consider the nine digits (in row-major order) as forming a single integer. The solution is represented by the smallest possible integer, i(g) = 123456789. Hence I suggest the heuristic h(s) = i(s) - i(g). For your example, h(s) = 639875124 - 123456789.
You can get an admissible (i.e., not overestimating) heuristic from your approach by taking all nine numbers into account, dividing the summed distances by 4, and rounding up to the next integer.
To improve the heuristic, you could look at pairs of numbers. If, e.g., the numbers 1 and 2 in the top left are swapped, you need at least 3 rotations to fix them both, which is a better value than the 1 + 1 from considering them separately. In the end you still need to divide by 4. You can pair up numbers arbitrarily, or even try all pairings and find the best division into pairs.
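A minimal Python sketch of the basic admissible version (my own code; a state is a row-major tuple of the digits 1-9; one rotation moves exactly four tiles, each by Manhattan distance 1, so the total Manhattan distance can drop by at most 4 per move):

    from math import ceil

    GOAL = {v: divmod(v - 1, 3) for v in range(1, 10)}   # value -> (row, col)

    def h(state):
        """Admissible heuristic: total Manhattan distance over all nine
        tiles, divided by 4 and rounded up."""
        total = 0
        for i, v in enumerate(state):
            r, c = divmod(i, 3)
            gr, gc = GOAL[v]
            total += abs(r - gr) + abs(c - gc)
        return ceil(total / 4)

    print(h((6, 3, 9, 8, 7, 5, 1, 2, 4)))   # the start state above -> 5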
All elements should be taken into account when calculating the distance, not just the corner elements. Imagine that all corner elements 1, 3, 7, 9 are in their home positions but all the others are not.
It could be argued that elements which are neighbors in the final state should tend to move closer together with each step, so neighbor distance could also be part of the heuristic, though probably with a weaker influence than the distance of the elements to their final positions.
