Introduction
I have a bipartite graph with workers (W) and tasks (T).
The goal is to assign all tasks to the workers so as to minimize the maximum time spent, i.e. to finish the last task as soon as possible.
Question
What modifications to the Hungarian algorithm have to be made to accomplish this?
If the Hungarian algorithm is not useful, what would be a good mathematical approach?
Mathematically, I don't know how to work with multiple task assignments per worker.
I will implement it in Python once I understand the math theory.
Problem
Conditions:
A task can only be assigned to one worker
There is no restriction on the number of tasks
All tasks must be assigned
A worker can have multiple tasks assigned
There is no restriction on the number of workers
A worker may end up with no assignment
Workers are not all free to start working at the same time
Example
If I have 7 tasks T={T₁, T₂, T₃, T₄, T₅, T₆, T₇} and 3 workers W={W₁, W₂, W₃}, the workers will be free to start working at F={4, 7, 8} (where Fᵢ is the time Wᵢ needs before being free to start working) and the cost matrix gives the time each worker needs for each task (the full matrix is not reproduced here; the entries used in the example below are W₁: T₁=5, T₂=4, T₃=3; W₂: T₄=4, T₅=9; W₃: T₆=1, T₇=7).
A matching example could be (not necessarily optimal in this case; it is just an example):
W₁ = {T₁, T₂, T₃}
W₂ = {T₄, T₅}
W₃ = {T₆, T₇}
In this case the time spent by each worker is:
Time(W₁) = 4+5+4+3 = 16
Time(W₂) = 7+4+9 = 20
Time(W₃) = 8+1+7 = 16
Explained as:
For W₁, we have to wait:
4 until he is free,
after that he will finish T₁ in 5,
T₂ in 4,
T₃ in 3,
giving a total time of 16.
For W₂, we have to wait:
7 until he is free,
after that he will finish T₄ in 4,
T₅ in 9,
giving a total time of 20.
For W₃, we have to wait:
8 until he is free,
after that he will finish T₆ in 1,
T₇ in 7,
giving a total time of 16.
Goal
Minimize the maximum total time, not the sum of the totals.
If the total times {9, 6, 6} (sum 21) form a solution, then {9, 9, 9} (sum 27) is a solution too, while {10, 1, 1} (sum 12) is not: in the first and second cases the last task finishes at time 9, but in the third case it finishes at time 10.
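To make the objective concrete, here is a minimal Python sketch (my own; it uses the free times and the task costs from the worked example above) that evaluates a candidate assignment:

    # Free times from the example: worker i becomes free at free_at[i].
    free_at = [4, 7, 8]

    # Candidate assignment: costs[i] lists the costs of the tasks given to
    # worker i (the values below are the ones used in the worked example).
    costs = [
        [5, 4, 3],  # W1 <- T1, T2, T3
        [4, 9],     # W2 <- T4, T5
        [1, 7],     # W3 <- T6, T7
    ]

    finish = [f + sum(c) for f, c in zip(free_at, costs)]
    print(finish)       # [16, 20, 16]
    print(max(finish))  # 20 -- the makespan to be minimized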
Given 2 lists that indicate the arriving time and the leaving time of each guest at a party, how can I find the largest number of guests (or who they are) that hang out together for at least minTime seconds?
Example:
Input:
List<float> ArrivingList = {3,2,8,12,5};
List<float> LevingList = {17,7,19,15,11};
int minTime = 4;
Meaning that the first guest arrives at time 3 and leaves at time 17, the second guest arrives at time 2 and leaves at time 7, etc.
Output: {0,1}; // or {0,4}, both are correct for this example.
I know how to solve it without the minTime requirement, but I just couldn't figure out this version.
EDIT: Please note that my question is NOT a duplicate of this one.
I'm looking for the maximum number of guests that DO overlap AND for a defined period of time.
Edit 2: My goal is to get the largest overlapping subset of the guests that spends minTime together.
Example 2:
Input:
List<float> ArrivingList = {1,2,3};
List<float> LevingList = {4,5,6};
int minTime = 3;
Consider the interval (2,5). Even though there is an overlap of 3 seconds, it is not continuous: it switches between guest #0 and guest #2.
Output: {0}; // or {1} or {2}, because every guest spends minTime at the party, but never together with the others
I guess you can use the following algorithm:
Init answer as an empty array
For each pair of guests i, j:
    overlapTime = min(leaving(i), leaving(j)) - max(arriving(i), arriving(j))
    If overlapTime >= minTime:
        Push (i, j) to the answer array
This will be O(n^2)
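A minimal Python version of that pairwise check (the function name is mine):

    def overlapping_pairs(arriving, leaving, min_time):
        # Return all pairs (i, j) of guests whose stays overlap for at
        # least min_time. Checking every pair is O(n^2).
        pairs = []
        n = len(arriving)
        for i in range(n):
            for j in range(i + 1, n):
                overlap = min(leaving[i], leaving[j]) - max(arriving[i], arriving[j])
                if overlap >= min_time:
                    pairs.append((i, j))
        return pairs

    print(overlapping_pairs([3, 2, 8, 12, 5], [17, 7, 19, 15, 11], 4))
    # [(0, 1), (0, 2), (0, 4)]

Note that this only produces the qualifying pairs; picking the largest subset of guests that are all together for minTime needs more work on top of it.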
Assume we have a set of n jobs to execute, each of which takes unit time. At any time we can serve exactly one job. Job i, 1 <= i <= n, earns us a profit if and only if it is executed no later than its deadline.
We call a set of jobs feasible if there exists at least one sequence that allows each job in the set to be performed no later than its deadline. "Earliest deadline first" yields a feasible sequence whenever one exists.
Show that the greedy algorithm is optimal: add at every step the job with the highest profit among those not yet considered, provided that the chosen set of jobs remains feasible.
MUST DO THIS FIRST: show first that it is always possible to re-schedule two feasible sequences (one computed by Greedy) in such a way that every job common to both sequences is scheduled at the same time. This new sequence might contain gaps.
UPDATE
I created an example that seems to disprove the algorithm:
Assume 4 jobs:
Job A has profit 1, time duration 2, deadline before day 3;
Job B has profit 4, time duration 1, deadline before day 4;
Job C has profit 3, time duration 1, deadline before day 3;
Job D has profit 2, time duration 1, deadline before day 2.
If we use the greedy algorithm with highest profit first, then we only get jobs B & C. However, if we go earliest deadline first, then we can fit jobs C, D and B for a higher total profit (note, though, that my example uses non-unit durations, unlike the original problem statement).
I am not sure I am approaching this question in the right way, since I seem to have created an example that disproves what the question asks me to show.
This problem looks like Job Shop Scheduling, which is NP-complete (meaning there is no known optimal greedy algorithm, even though experts have been trying to find one since the '70s). Here's a video on a more advanced form of that use case being solved with a greedy algorithm followed by Local Search.
If we presume your use case can indeed be relaxed to Job Shop Scheduling, then there are many optimization algorithms that can help, such as metaheuristics (including local search methods such as Tabu Search and Simulated Annealing), (M)IP, Dynamic Programming, Constraint Programming, ... The reason there are so many choices is that none of them is perfect. I prefer metaheuristics, as they out-scale the others in all the research challenges I've seen.
In fact, none of "earliest deadline first", "highest profit first" and "highest profit/duration first" is a correct algorithm...
Assume 2 jobs:
Job A has profit 1, time duration 1, deadline before day 1;
Job B has profit 2, time duration 2, deadline before day 2;
Then "earliest deadline first" fails to get correct answer. Correct answer is B.
Assume another 5 jobs:
Job A has profit 2, time duration 3, deadline before day 3;
Job B has profit 1, time duration 1, deadline before day 1;
Job C has profit 1, time duration 1, deadline before day 2;
Job D has profit 1, time duration 1, deadline before day 3;
Job E has profit 1, time duration 1, deadline before day 4;
Then "highest profit first" fails to get correct answer. Correct answer is BCDE.
Assume another 4 jobs:
Job A has profit 6, time duration 4, deadline before day 6;
Job B has profit 4, time duration 3, deadline before day 6;
Job C has profit 4, time duration 3, deadline before day 6;
Job D has profit 0.0001, time duration 2, deadline before day 6;
Then "highest profit/duration first" fails to get correct answer. Correct answer is BC (Thanks for #dognose's counter-example, see comment).
One correct algorithm is Dynamic Programming:
First order the jobs by deadline, ascending. dp[i][j] = k means that, using only the first i jobs and exactly j time units, the highest achievable profit is k. Initially dp[0][0] = 0.
Job info is stored in 3 arrays (1-indexed): profits in profit[i], durations in time[i], and deadlines in deadline[i], 1 <= i <= n.
// sort jobs by deadline in ascending order
...
// initially the 2-dimensional dp array is all -1; -1 marks an unreachable state
...
dp[0][0] = 0;
int maxDeadline = max(deadline); // maximum deadline value
for (int i = 0; i < n; i++) {
    for (int j = 0; j <= maxDeadline; j++) {
        if (dp[i][j] == -1) continue; // unreachable state
        // Option 1: skip job i+1; the profit carries over unchanged.
        dp[i+1][j] = max(dp[i+1][j], dp[i][j]);
        // Option 2: do job i+1, if it can still finish by its deadline.
        if (j + time[i+1] <= deadline[i+1]) {
            dp[i+1][j + time[i+1]] = max(dp[i+1][j + time[i+1]], dp[i][j] + profit[i+1]);
        }
    }
}
// the maximum total profit is the maximum value of dp[n][j] over all j
The time/space complexity of the DP algorithm is O(n*m), where n is the job count and m is the maximum deadline, so it depends heavily on the number of jobs and on how large the deadlines are. If n and/or m is very large it may be difficult to get an answer in time, but for common cases it works well.
The problem is called Job Sequencing with Deadlines, and it can be solved by two algorithms based on a greedy strategy:
Sort the input jobs by decreasing profit. Insert every job into the solution list, kept sorted by increasing deadline. If after including a job some job in the solution has an index greater than its deadline, do not include this job.
Sort the input jobs by decreasing profit. Insert every job into the solution list at the last possible free index. If there is no free index less than or equal to the job's deadline, do not include the job (see the sketch after this list).
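A minimal Python sketch of the second variant (unit-time jobs, as in the question above; all names are mine):

    def job_sequencing(jobs):
        # jobs: list of (profit, deadline) pairs, each job taking unit time.
        # Greedy: highest profit first, placed into the latest free slot at
        # or before its deadline; jobs that do not fit are dropped.
        jobs = sorted(jobs, reverse=True)      # decreasing profit
        max_deadline = max(d for _, d in jobs)
        slots = [None] * max_deadline          # slots[t] = job run in time unit t+1
        for profit, deadline in jobs:
            for t in range(min(deadline, max_deadline) - 1, -1, -1):
                if slots[t] is None:
                    slots[t] = profit
                    break                      # placed; move on to the next job
        return [p for p in slots if p is not None]

    print(job_sequencing([(40, 1), (30, 1), (20, 4), (10, 1)]))
    # [40, 20] -> total profit 60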
public class JOB {
    public static void main(String[] args) {
        // Jobs are already sorted by decreasing profit; dl[i] is the
        // deadline (last allowed slot, 1-based) of job i.
        char[] name = {'1', '2', '3', '4'};
        int[] dl = {1, 1, 4, 1};
        int[] profit = {40, 30, 20, 10};

        // Schedule with 2 slots; '\0' (the default char value) marks a free slot.
        char[] cap = new char[2];

        // For each job (highest profit first), take the latest free slot
        // at or before its deadline; skip the job if none is free.
        for (int i = 0; i < name.length; i++) {
            for (int j = Math.min(dl[i], cap.length) - 1; j >= 0; j--) {
                if (cap[j] == '\0') {
                    cap[j] = name[i];
                    break;
                }
            }
        }

        for (int i = 0; i < cap.length; i++)
            System.out.println(cap[i]);
    }
}
Can anyone help me with an algorithm for this problem?
We have a big number (19 digits) and, in a loop, we subtract one of the digits of that number from the number itself.
We continue to do this until the number reaches zero. We want to calculate the minimum number of subtractions that makes a given number reach zero.
The algorithm must respond fast, for a 19-digit number (~10^19), within two seconds. As an example, the input 36 gives 7:
1. 36 - 6 = 30
2. 30 - 3 = 27
3. 27 - 7 = 20
4. 20 - 2 = 18
5. 18 - 8 = 10
6. 10 - 1 = 9
7. 9 - 9 = 0
Thank you.
The minimum number of subtractions to reach zero makes this, I suspect, a very thorny problem, one that will require a great deal of backtracking over potential solutions, making it possibly too expensive for your time limitations.
But the first thing you should do is a sanity check. Since the largest digit is a 9, a 19-digit number will require about 10^18 subtractions to reach zero. Code up a simple program to continuously subtract 9 from 10^19 until it becomes less than ten. If you can't do that within the two seconds, you're in trouble.
By way of example, the following program (a):
#include <stdio.h>
#include <stdlib.h>   /* for strtoull */

int main (int argc, char *argv[]) {
    unsigned long long x = strtoull(argv[1], NULL, 10);
    x /= 1000000000;  /* one-billion divisor, see below */
    while (x > 9)
        x -= 9;
    return x;
}
when run with the argument 10000000000000000000 (10^19), takes a second and a half of clock time (and of CPU time, since it's all calculation) even at gcc's insane optimisation level of -O3:
real 0m1.531s
user 0m1.528s
sys 0m0.000s
And that's with the one-billion divisor just before the while loop, meaning the full number of iterations would take about 48 years.
So a brute force method isn't going to help here, what you need is some serious mathematical analysis which probably means you should post a similar question over at https://math.stackexchange.com/ and let the math geniuses have a shot.
(a) If you're wondering why I'm getting the value from the user rather than using a constant of 10000000000000000000ULL, it's to prevent gcc from calculating it at compile time and turning it into something like:
mov $1, %eax
Ditto for the return x, which prevents it from noticing that I don't use the final value of x and optimising the loop out of existence altogether.
I don't have a solution that can solve 19 digit numbers in 2 seconds. Not even close. But I did implement a couple of algorithms (including a dynamic programming algorithm that solves for the optimum), and gained some insight that I believe is interesting.
Greedy Algorithm
As a baseline, I implemented a greedy algorithm that simply picks the largest digit in each step:
#include <stdbool.h>  /* for bool (used in the DP version below) */
#include <stdint.h>   /* for uint64_t and uint_fast8_t */

uint64_t countGreedy(uint64_t inputVal) {
uint64_t remVal = inputVal;
uint64_t nStep = 0;
while (remVal > 0) {
uint64_t digitVal = remVal;
uint_fast8_t maxDigit = 0;
while (digitVal > 0) {
uint64_t nextDigitVal = digitVal / 10;
uint_fast8_t digit = digitVal - nextDigitVal * 10;
if (digit > maxDigit) {
maxDigit = digit;
}
digitVal = nextDigitVal;
}
remVal -= maxDigit;
++nStep;
}
return nStep;
}
Dynamic Programming Algorithm
The idea for this is that we can calculate the optimum incrementally. For a given value, we pick a digit, which adds one step to the optimum number of steps for the value with the digit subtracted.
With the target function (optimum number of steps) for a given value named optSteps(val), and the nonzero digits of the value named d_i, the following relationship holds:
optSteps(val) = 1 + min_i(optSteps(val - d_i))
This can be implemented with a dynamic programming algorithm. Since d_i is at most 9, we only need the previous 9 values to build on. In my implementation, I keep a circular buffer of 10 values:
static uint64_t countDynamic(uint64_t inputVal) {
uint64_t minSteps[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
uint_fast8_t digit0 = 0;
for (uint64_t val = 10; val <= inputVal; ++val) {
digit0 = val % 10;
uint64_t digitVal = val;
uint64_t minPrevStep = 0;
bool prevStepSet = false;
while (digitVal > 0) {
uint64_t nextDigitVal = digitVal / 10;
uint_fast8_t digit = digitVal - nextDigitVal * 10;
if (digit > 0) {
uint64_t prevStep = 0;
if (digit > digit0) {
prevStep = minSteps[10 + digit0 - digit];
} else {
prevStep = minSteps[digit0 - digit];
}
if (!prevStepSet || prevStep < minPrevStep) {
minPrevStep = prevStep;
prevStepSet = true;
}
}
digitVal = nextDigitVal;
}
minSteps[digit0] = minPrevStep + 1;
}
return minSteps[digit0];
}
Comparison of Results
This may come as a surprise: I ran both algorithms on all values up to 1,000,000, and the results are absolutely identical. This suggests that the greedy algorithm actually computes the optimum.
I don't have a formal proof that this is indeed true for all possible values. It intuitively kind of makes sense to me. If in any given step, you choose a smaller digit than the maximum, you compromise the immediate progress with the goal of getting into a more favorable situation that allows you to catch up and pass the greedy approach. But in all the scenarios I thought about, the situation after taking a sub-optimal step just does not get significantly more favorable. It might make the next step bigger, but that is at most enough to get even again.
Complexity
While both algorithms look linear in the size of the value, they also loop over all digits in the value. Since the number of digits corresponds to log(n), I believe the complexity is O(n * log(n)).
I think it's possible to make it linear by keeping counts of the frequency of each digit, and modifying them incrementally. But I doubt it would actually be faster. It requires more logic, and turns a loop over all digits in the value (which is in the range of 2-19 for the values we are looking at) into a fixed loop over 10 possible digits.
Runtimes
Not surprisingly, the greedy algorithm is faster to calculate a single value. For example, for value 1,000,000,000, the runtimes on my MacBook Pro are:
greedy: 3 seconds
dynamic: 36 seconds
On the other hand, the dynamic programming approach is obviously much faster at calculating all the values, since its incremental approach needs them as intermediate results anyway. For calculating all values from 10 to 1,000,000:
greedy: 19 minutes
dynamic: 0.03 seconds
As the runtimes above already show, the greedy algorithm can handle input values of about 9 digits within the targeted runtime of 2 seconds. The implementations aren't really tuned, and it's certainly possible to squeeze out some more time, but the improvements would be fractional.
Ideas
As already explored in another answer, there's no chance of getting the result for 19 digit numbers in 2 seconds by subtracting digits one by one. Since we subtract at most 9 in each step, completing this for a value of 10^19 needs more than 10^18 steps. We mostly use computers that perform in the rough range of 10^9 operations/second, which suggests that it would take about 10^9 seconds.
Therefore, we need something that can take shortcuts. I can think of scenarios where that's possible, but haven't been able to generalize it to a full strategy so far.
For example, if your current value is 9999, you know that you can subtract 9 until you reach 9000. So you can calculate that you will make 112 steps ((9999 - 9000) / 9 + 1) where you subtract 9, which can be done in a few operations.
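As a rough sketch of that shortcut (my own illustration, not a general algorithm), the whole run collapses into constant-time arithmetic:

    # From 9999, the digit 9 stays the maximum digit all the way down to
    # 9000, so the subtract-9 run can be computed instead of looped:
    val = 9999
    floor = 9000                    # 9 remains the maximum digit down to here
    steps = (val - floor) // 9 + 1  # 112 steps, each subtracting 9
    val -= 9 * steps
    print(steps, val)               # 112 8991

Generalizing this, i.e. quickly finding the right "floor" for every such run, is the part I have not managed to turn into a full strategy.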
As said in the comments already, and agreeing with @paxdiablo's other answer, I'm not sure there is an algorithm that finds the ideal solution without some backtracking; and the size of the number combined with the time constraint might be tough as well.
A general consideration though: you might want to find a way to decide between always subtracting the highest digit (which will decrease your current number by the largest possible amount, obviously), and looking at your current digits and subtracting whichever of them will give you the largest "new" digit.
Say your current number only consists of digits between 0 and 5. Then you might be tempted to subtract the 5 to decrease your number by the highest possible value and continue with the next step. If the last digit of your current number is 3, however, then you might want to subtract 4 instead, since that will leave a 9 as the new last digit, instead of "only" the 8 you would get by subtracting 5.
Whereas if you already have a 2 and two 9s among your digits, and the last digit is a 1, then you might want to subtract the 9 anyway: you will be left with the second 9 in the result (at least in most cases; in some edge cases it might get obliterated as well), so subtracting the 2 instead would not have the advantage of giving you a "high" 9 that you would otherwise not have in the next step, and it would have the disadvantage of not lowering your number by as much as subtracting the 9 would.
But every digit you subtract will affect not only the next step directly, but also the following steps indirectly; so again, I doubt there is a way to always choose the ideal digit for the current step without any backtracking or similar measures.
I am preparing for a software job interview, and I am having trouble with in-place array modifications.
For example, in the out-shuffle problem you interleave two halves of an array so that 1 2 3 4 5 6 7 8 would become 1 5 2 6 3 7 4 8. This question asks for a constant-memory solution (and linear-time, although I'm not sure that's even possible).
At first I thought a linear algorithm would be trivial, but I couldn't work one out. Then I did find a simple O(n^2) algorithm, but it took me a long time, and I still can't find a faster solution.
I remember also having trouble solving a similar problem from Bentley's Programming Pearls, column 2:
Rotate an array left by i positions (e.g. abcde rotated by 2 becomes cdeab), in time O(n) and with just a couple of bytes extra space.
Does anyone have tips to help wrap my head around such problems?
About an O(n) time, O(1) space algorithm for out-shuffle
Doing an out-shuffle in O(n) time and O(1) space is possible, but it is tough. Not sure why people think it is easy and are suggesting you try something else.
The following paper has an O(n) time and O(1) space solution (though it is for in-shuffle, doing in-shuffle makes out-shuffle trivial):
http://arxiv.org/PS_cache/arxiv/pdf/0805/0805.1598v1.pdf
About a method to tackle in-place array modification algorithms
In-place modification algorithms could become very hard to handle.
Consider a couple:
In-place out-shuffle in linear time. Uses number theory.
In-place merge sort. This was open for a few years; an algorithm eventually appeared, but it was too complicated to be practical. Uses very complicated bookkeeping.
Sorry, if this sounds discouraging, but there is no magic elixir that will solve all in-place algorithm problems for you. You need to work with the problem, figure out its properties, and try to exploit them (as is the case with most algorithms).
That said, for array modifications where the result is a permutation of the original array, you can try the method of following the cycles of the permutation. Basically, any permutation can be written as a disjoint set of cycles (see John's answer too). For instance the permutation:
1 4 2 5 3 6
of 1 2 3 4 5 6 can be written as
1 -> 1
2 -> 3 -> 5 -> 4 -> 2
6 -> 6.
You can read the arrow as 'goes to'.
So to permute the array 1 2 3 4 5 6 you follow the three cycles:
1 goes to 1.
6 goes to 6.
2 goes to 3, 3 goes to 5, 5 goes to 4, and 4 goes to 2.
To follow this long cycle, you can use just one temp variable. Store 3 in it. Put 2 where 3 was. Now put 3 in 5 and store 5 in the temp and so on. Since you only use constant extra temp space to follow a particular cycle, you are doing an in-place modification of the array for that cycle.
Now if I gave you a formula for computing where an element goes to, all you now need is the set of starting elements of each cycle.
A judicious choice of the starting points of the cycles can make the algorithm easy. If you come up with the starting points in O(1) space, you now have a complete in-place algorithm. This is where you might actually have to get familiar with the problem and exploit its properties.
Even if you didn't know how to compute the starting points of the cycles, but had a formula to compute the next element, you could use this method to get an O(n) time in-place algorithm in some special cases.
For instance: if you knew the array of unsigned integers held only positive integers.
You can now follow the cycles, but negate the numbers in them as an indicator of 'visited' elements. Now you can walk the array and pick the first positive number you come across and follow the cycles for that, making the elements of the cycle negative and continue to find untouched elements. In the end, you just make all the elements positive again to get the resulting permutation.
You get an O(n) time and O(1) space algorithm! Of course, we kind of 'cheated' by using the sign bits of the array integers as our personal 'visited' bitmap.
Even if the array was not necessarily integers, this method (of following the cycles, not the hack of sign bits :-)) can actually be used to tackle the two problems you state:
The in-shuffle (or out-shuffle) problem: when 2n+1 is a power of 3, it can be shown (using number theory) that 1, 3, 3^2, etc. are in different cycles and that all cycles are covered by those starting points. Combine this with the fact that the in-shuffle is amenable to divide and conquer, and you get an O(n) time, O(1) space algorithm (the formula is i -> 2*i modulo 2n+1). Refer to the paper above for more details.
The cyclic-shift problem: cyclically shifting an array of size n by k is a permutation of the array (given by the formula i goes to (i+k) modulo n), and it can also be solved in linear time and in place using the cycle-following method. In fact, in terms of the number of element exchanges, the cycle-following method beats the three-reversals algorithm. Of course, cycle-following can kill the cache because of its access patterns, so in practice the three-reversals algorithm might actually fare better.
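To illustrate, here is a minimal Python sketch of cycle-following applied to the cyclic shift (my own; it relies on the standard fact that the rotation permutation splits into gcd(n, k) cycles starting at positions 0 .. gcd-1):

    from math import gcd

    def rotate_left(a, k):
        # Rotate list a left by k positions in O(n) time and O(1) extra
        # space by following the cycles of the permutation.
        n = len(a)
        k %= n
        if k == 0:
            return
        for start in range(gcd(n, k)):
            tmp = a[start]
            i = start
            while True:
                j = (i + k) % n   # the element at j moves into slot i
                if j == start:
                    break
                a[i] = a[j]
                i = j
            a[i] = tmp

    a = list("abcde")
    rotate_left(a, 2)
    print("".join(a))  # cdeab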
As for interviews, if the interviewer is a reasonable person, they will be looking at how you think and approach the problem and not whether you actually solve it. So even if you don't solve a problem, I think you should not be discouraged.
The basic strategy with in-place algorithms is to figure out the rule for moving an entry from slot N to slot M.
So, take your shuffle, for instance. If A and B are card positions and N is the number of cards, the rules for the first half of the deck are different from the rules for the second half:
// A is the current location, B is the new location.
// this math assumes that the first card is card 0
if (A < N/2)
B = A * 2;
else
B = (A - N/2) * 2 + 1;
Now that we know the rule, we just have to move each card. Each time we move a card, we calculate its new location B, save the card that is currently in slot B, place card A in slot B, then let B become the new A and loop back to the top of the algorithm. Each card we move displaces another card, which becomes the next card to move.
I think the analysis is easier if we are 0 based rather than 1 based, so
0 1 2 3 4 5 6 7 // before
0 4 1 5 2 6 3 7 // after
So we want to move 1->2, 2->4, 4->1, and that completes a cycle;
then move 3->6, 6->5, 5->3, which completes another cycle,
and we are done.
Now we know that card 0 and card N-1 don't move, so we can ignore those, and we know that we only need to swap N-2 cards in total. The only sticky bit is that there are 2 cycles, 1,2,4,1 and 3,6,5,3: when we get to card 1 the second time, we need to move on to card 3.
const int N = 8;
card ary[N];    // our array of cards
int A = 1;      // current slot
int B;          // destination slot
card a = ary[A];
for (int i = 0; i < N - 2; ++i)  // N-2 cards actually move
{
    if (A < N/2)
        B = A * 2;
    else
        B = (A - N/2) * 2 + 1;
    card b = ary[B];
    ary[B] = a;
    a = b;
    A = B;
    if (A == 1)  // first cycle is complete,
    {            // jump to the start of the second cycle
        A = 3;
        a = ary[A];
    }
}
Now this code only works for the 8-card example, because of that if test that moves us from 1 to 3 when we finish the first cycle. What we really need is a general rule to recognize the end of a cycle and where to start the next one.
That rule could be mathematical if you can think of one, or you could keep track of which places you have visited in a separate array, and when A returns to a visited place, scan forward in that array for the first non-visited place.
For your in-place algorithm to be O(n), the solution will need to be mathematical.
I hope this breakdown of the thinking process is helpful to you. If I was interviewing you, I would expect to see something like this on the whiteboard.
Note: As Moron points out, this doesn't work for all values of N, it's just an example of the sort of analysis that an interviewer is looking for.
Frank,
For programming with loops and arrays, nothing beats David Gries's textbook The Science of Programming. I studied it over 20 years ago, and there are ideas that I still use every day. It is very mathematical and will require real effort to master, but that effort will repay you many times over for your whole career.
Complementing Aryabhatta's answer:
There is a general method to "follow the cycles" even without knowing the starting positions of each cycle or using memory to record visited cycles. This is especially useful if you need O(1) memory.
For each position i in the array, follow the cycle without moving any data yet, until you reach...
the starting position i: end of the cycle. This is a new cycle: follow it again, moving the data this time.
a position lower than i: this cycle was already visited; nothing to do.
Of course this has a time overhead (O(n^2), I believe) and has the cache problems of the general cycle-following method.
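A minimal Python sketch of this idea (my own; next_pos(i) is assumed to give the index that the element at position i moves to):

    def permute_in_place(a, next_pos):
        # Apply the permutation "element at i moves to next_pos(i)" in place,
        # treating the smallest index of each cycle as its canonical start.
        n = len(a)
        for start in range(n):
            # Walk the cycle without moving data; abandon it if we meet an
            # index smaller than start (that cycle was handled earlier).
            i = next_pos(start)
            while i > start:
                i = next_pos(i)
            if i < start:
                continue
            # i == start: this cycle is new; walk it again, moving the data.
            val, i = a[start], start
            while True:
                j = next_pos(i)
                val, a[j] = a[j], val  # drop the carried value, pick up the next
                i = j
                if i == start:
                    break

    # Out-shuffle of 1..8: the element at i goes to 2*i mod (n-1); the last stays.
    n = 8
    a = list(range(1, n + 1))
    permute_in_place(a, lambda i: 2 * i % (n - 1) if i < n - 1 else i)
    print(a)  # [1, 5, 2, 6, 3, 7, 4, 8]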
For the first one, let's assume n is even. You have:
first half: 1 2 3 4
second : 5 6 7 8
Let x1 = first[1], x2 = second[1].
Now, you have to print one from the first half, one from the second, one from the first, one from the second...
Meaning first[1], second[1], first[2], second[2], ...
Obviously, you don't keep two halves in memory, as that will be O(n) memory. You keep pointers to the two halves. Do you see how you'd do that?
The second is a bit harder. Consider:
12345
abcde
..cde
.....ab
..cdeab
cdeab
Do you notice anything? You should notice that the question basically asks you to move the first i characters to the end of your string, without affording you the luxury of copying the last n - i characters into a buffer, appending the first i, and returning the buffer. You need to do it with O(1) memory.
To figure out how to do this, you basically need a lot of practice with these kinds of problems, as with anything else. Practice makes perfect, basically. If you've never done these kinds of problems before, it's unlikely you'll figure it out. If you have, then you have to think about how you can manipulate the substrings and/or indices so that you solve your problem under the given constraints. The general rule is to work and learn as much as possible, so that you figure out the solutions to these problems quickly when you see them. But the solution differs quite a bit from problem to problem; there's no clear recipe for success, I'm afraid. Just read a lot and understand the stuff you read before you move on.
The logic for the second problem is this: what happens if we reverse the substring [1, 2], the substring [3, 5] and then concatenate them and reverse that? We have, in general:
1, 2, 3, 4, ..., i, i + 1, i + 2, ..., N
reverse [1, i] =>
i, i - 1, ..., 4, 3, 2, 1, i + 1, i + 2, ..., N
reverse [i + 1, N] =>
i, i - 1, ..., 4, 3, 2, 1, N, ..., i + 1
reverse [1, N] =>
i + 1, ..., N, 1, 2, 3, 4, ..., i - 1, i
which is what you wanted. Writing the reverse function using O(1) memory should be trivial.
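A minimal Python sketch of the whole rotation (my own, using 0-based indices instead of the 1-based ones above):

    def reverse(a, lo, hi):
        # Reverse a[lo..hi] in place (inclusive bounds), O(1) extra memory.
        while lo < hi:
            a[lo], a[hi] = a[hi], a[lo]
            lo += 1
            hi -= 1

    def rotate_by_reversal(a, i):
        # Rotate a left by i positions using three in-place reversals.
        n = len(a)
        reverse(a, 0, i - 1)   # reverse the first i elements
        reverse(a, i, n - 1)   # reverse the remaining n - i elements
        reverse(a, 0, n - 1)   # reverse the whole array

    a = list("abcde")
    rotate_by_reversal(a, 2)
    print("".join(a))  # cdeab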
Generally speaking, the idea is to loop through the array once, while:
storing the value at the position you are at in a temporary variable,
finding the correct value for that position and writing it, and
either moving on to the next position, or figuring out what to do with your temporary value before continuing.
A general approach could be as follows:
Construct a positions array int[] pos, such that pos[i] refers to the position (index) of a[i] in the shuffled array.
Rearrange the original array int[] a, according to this positions array pos.
/** Shuffle the array a. */
void shuffle(int[] a) {
// Step 1
int [] pos = contructRearrangementArray(a);
// Step 2
rearrange(a, pos);
}
/**
* Rearrange the given array a according to the positions array pos.
*/
private static void rearrange(int[] a, int[] pos)
{
// By definition 'pos' should not contain any duplicates, otherwise rearrange() can run forever.
// Do the above sanity check.
for (int i = 0; i < pos.length; i++) {
while (i != pos[i]) {
// This while loop completes one cycle in the array
swap(a, i, pos[i]);
swap(pos, i, pos[i]);
}
}
}
/** Swap ith element in a with jth element. */
public static void swap(int[] a, int i, int j)
{
int temp = a[i];
a[i] = a[j];
a[j] = temp;
}
As an example, for the case of outShuffle the following would be an implementation of contructRearrangementArray().
/**
* array : 1 2 3 4 5 6 7 8
* pos : 0 2 4 6 1 3 5 7
* outshuffle: 1 5 2 6 3 7 4 8 (outer boundaries remain same)
*/
public int[] contructRearrangementArray(int[] a)
{
if (a.length % 2 != 0) {
throw new IllegalArgumentException("Cannot outshuffle odd sized array");
}
int[] pos = new int[a.length];
for (int i = 0; i < pos.length; i++) {
pos[i] = i * 2 % (pos.length - 1);
}
pos[a.length - 1] = a.length - 1;
return pos;
}