Hey, I'm coding quicksort at the moment. I'm making the last element in the array my pivot. Specific case:
[7,6,8]
8 would be the pivot, 7 is low and 6 is high.
Since 7 < 8, low increases by 1. But now low and high are the same, so 6 gets swapped with 8, which would make the array [7,8,6]. But it should obviously be [6,7,8].
What am I forgetting to implement?
Figured it out: basically I had to add what to do in the scenario where the low pointer passes the high pointer without the high pointer changing.
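For anyone who hits the same thing, here's a minimal sketch in Python of a partition with the pivot at the end that handles the pointers meeting or crossing; it's not my exact code, just the idea (names are illustrative):

def partition(arr, lo, hi):
    """Partition arr[lo..hi] using arr[hi] as the pivot.
    Returns the pivot's final index."""
    pivot = arr[hi]
    left, right = lo, hi - 1
    while left <= right:
        # Advance left past elements smaller than the pivot.
        while left <= right and arr[left] < pivot:
            left += 1
        # Retreat right past elements larger than the pivot.
        while left <= right and arr[right] > pivot:
            right -= 1
        if left < right:
            arr[left], arr[right] = arr[right], arr[left]
            left += 1
            right -= 1
        elif left == right:
            break  # pointers met; stop scanning
    # left now marks the first element >= pivot: swap the pivot there,
    # even when the pointers crossed without the high pointer moving.
    arr[left], arr[hi] = arr[hi], arr[left]
    return left

def quicksort(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quicksort(arr, lo, p - 1)
        quicksort(arr, p + 1, hi)

data = [7, 6, 8]
quicksort(data)
print(data)  # [6, 7, 8]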
I'm creating a probability assistant for the game Battleship. In essence, for a given game state (field state and available ships), it would produce a field where every free cell is labelled with its probability of being a hit.
My current approach is a Monte Carlo style computation: pick a random free cell, a random ship, and a random rotation; check whether this placement is valid, and if so, continue with the next ship from the available set. If the available set is empty, add the way the ships were placed to an output stack. Repeat this many times, then use the outputs to compute the probability of each cell.
Is there a sane algorithm to process all possible ship placements for a given field state?
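Here's a stripped-down Python sketch of what I'm doing now, just so the question is concrete (the fleet list, the blocked grid of known misses, and the retry limit are all simplified placeholders):

import random

SIZE = 10
SHIPS = [5, 4, 4, 3, 3, 3, 2, 2, 2, 2]  # placeholder fleet; adjust per variant

def ship_cells(r, c, length, horizontal):
    return [(r, c + i) if horizontal else (r + i, c) for i in range(length)]

def valid(blocked, occupied, cells):
    # On the board, not on a known miss, not overlapping another ship.
    return all(0 <= r < SIZE and 0 <= c < SIZE and not blocked[r][c]
               and (r, c) not in occupied for r, c in cells)

def random_layout(blocked, ships, tries=100):
    """Place every ship at random; return the covered cells, or None."""
    occupied = set()
    for length in ships:
        for _ in range(tries):
            cells = ship_cells(random.randrange(SIZE), random.randrange(SIZE),
                               length, random.random() < 0.5)
            if valid(blocked, occupied, cells):
                occupied.update(cells)
                break
        else:
            return None  # gave up on this sample
    return occupied

def heatmap(blocked, ships, samples=10000):
    counts = [[0] * SIZE for _ in range(SIZE)]
    good = 0
    for _ in range(samples):
        layout = random_layout(blocked, ships)
        if layout is not None:
            good += 1
            for r, c in layout:
                counts[r][c] += 1
    return [[counts[r][c] / max(good, 1) for c in range(SIZE)]
            for r in range(SIZE)]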
An exact solution is possible. But it does not qualify as sane in my book.
Still, here is the idea.
There are many variants of the game, but let's say that we start with a worst case scenario of 1 ship of size 5, 2 of size 4, 3 of size 3 and 4 of size 2.
The "discovered state" of the board is all spots where shots have been taken, or ships have been discovered, plus the number of remaining ships. The discovered state naively requires 100 bits for the board (10x10, any can be shot) plus 1 bit for the count of remaining ships of size 5, 2 bits for the remaining ships of size 4, 2 bits for remaining ships of size 3 and 3 bits for remaining ships of size 2. This makes 108 bits, which fits in 14 bytes.
Now conceptually the idea is to figure out the map by shooting each square in turn in the first row, the second row, and so on, and recording the game state along with transitions. We can record the forward transitions and counts to find how many ways there are to get to any state.
Then find the end state of everything finished and all ships used and walk the transitions backwards to find how many ways there are to get from any state to the end state.
Now walk the data structure forward, knowing the probability of arriving at any state while on the way to the end, but this time we can figure out the probability of each way of finding a ship on each square as we go forward. Sum those and we have our probability heatmap.
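The counting itself is the standard forward/backward path-counting trick on a DAG. A toy sketch in Python, with a made-up four-state graph standing in for the real transition structure:

from functools import lru_cache

# Hypothetical tiny transition DAG; real states would be the packed
# 108-bit integers described above.
GRAPH = {"start": ["a", "b"], "b": ["a", "end"], "a": ["end"], "end": []}
TOPO = ["start", "b", "a", "end"]  # one topological order of GRAPH

def forward_counts():
    """Number of ways to reach each state from the start."""
    counts = dict.fromkeys(GRAPH, 0)
    counts["start"] = 1
    for s in TOPO:
        for nxt in GRAPH[s]:
            counts[nxt] += counts[s]
    return counts

@lru_cache(maxsize=None)
def backward_count(state):
    """Number of ways to reach the end state from `state`."""
    return 1 if state == "end" else sum(backward_count(n) for n in GRAPH[state])

fwd = forward_counts()
for s in GRAPH:
    # Fraction of complete start-to-end paths that pass through s.
    print(s, fwd[s] * backward_count(s) / fwd["end"])

Summing the same forward-times-backward product over the transitions that reveal a ship on a given square gives that square's heatmap entry.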
Is this doable? In memory, no. In a distributed system it might be though.
Remember that I said that recording a state took 14 bytes? Adding a count to that takes another 8 bytes which takes us to 22 bytes. Adding the reverse count takes us to 30 bytes. My back of the envelope estimate is that at any point in our path there are on the order of a half-billion states we might be in with various ships left, killed ships sticking out and so on. That's 15 GB of data. Potentially for each of 100 squares. Which is 1.5 terabytes of data. Which we have to process in 3 passes.
This is my first post on Stack Overflow, so please excuse my mistakes if I'm doing something wrong.
OK, so I'm trying to find an algorithm/function/something that can calculate how many times I have to repeat the same type of shuffle of 52 playing cards to get back to where I started.
The specific shuffle I'm using goes like this:
-You will have two piles.
-You have the deck with the back facing up. (Let's call this pile 1.)
-You will now alternate between putting a card at the back of pile 1 and putting one in pile 2. (Example for the pile-1 move: say you have 4 cards in a pile, back facing up, with card 4 closest to the ground and card 1 closest to the sky, so their order is 4,3,2,1. You take card 1 and put it beneath card 4, meaning card 1 is now closest to the ground and card 4 is second closest; the order is now 1,4,3,2.)
-Pile 2 will "stack downwards", meaning you will always put the new card at the bottom of that pile. (Back always facing up.)
-The first card will always get put at the back of pile 1.
-Repeat this process until all cards are in pile 2.
-Now take pile 2 and do the exact same thing you just did.
My question is: How many times do I have to repeat this process until I get back where I started?
Side notes:
- If this is a common way of shuffling cards and there already is a solution, please let me know.
- I'm still new to math and coding so if writing up an equation/algorithm/code for this is really easy then don't laugh at me pls ;<.
- Sorry if I'm asking this at the wrong place, I don't know how all this works.
- English isn't my first language, so please excuse any bad grammar and/or other errors.
I do, however, have code that does all of this (Link here), but I'm unsure if it's the most efficient way to do it, and it hasn't given a result yet, so I don't even know if it works. If you want to give tips or suggestions on how to change it, then please do; I would really appreciate it. It's done in Scratch, however, because I can't write in any other languages... sorry...
Thanks in advance.
Any fixed shuffle is equivalent to a permutation; what you want to know is the order of that permutation. This can be computed by decomposing the permutation into cycles and then computing the least common multiple of the cycle lengths.
I'm not able to properly understand your algorithm, but here's an example of shuffling 8 elements and then finding the number of times that shuffle needs to be repeated to get back to an unshuffled state.
Suppose the sequence starts as 1,2,3,4,5,6,7,8 and after one shuffle, it's 3,1,4,5,2,8,7,6.
The number 1 goes to position 2, then 2 goes to position 5, then 5 goes to position 4, then 4 goes to position 3, then 3 goes to position 1. So the first cycle is (1 2 5 4 3).
The number 6 goes to position 8, then 8 goes to position 6. So the next cycle is (6 8).
The number 7 stays in position 7, so this is a trivial cycle (7).
The lengths of the cycles are 5, 2 and 1, so the least common multiple is 10. This shuffle takes 10 iterations to get back to the initial state.
If you don't mind sitting down with pen and paper for a while, you should be able to follow this procedure for your own shuffling algorithm.
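Or let the computer do the pen-and-paper work. Here's a short Python sketch that automates exactly the procedure above; the input is what 1, 2, ..., n looks like after one shuffle:

from math import gcd

def shuffle_order(shuffled):
    """Repetitions of a fixed shuffle needed to restore the original
    order: the LCM of the lengths of the permutation's cycles."""
    n = len(shuffled)
    # dest[i] = 0-based position that the card starting at position i
    # occupies after one shuffle.
    dest = [0] * n
    for pos, card in enumerate(shuffled):
        dest[card - 1] = pos
    # Decompose into cycles and take the LCM of the cycle lengths.
    seen = [False] * n
    order = 1
    for i in range(n):
        length, j = 0, i
        while not seen[j]:
            seen[j] = True
            j = dest[j]
            length += 1
        if length:
            order = order * length // gcd(order, length)
    return order

print(shuffle_order([3, 1, 4, 5, 2, 8, 7, 6]))  # the example above: 10

For the 52-card question, run the shuffle once on the list 1..52 and feed the result in.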
I'm learning about mining, and the first thing that surprised me is that the nonce part of the algorithm, which is supposed to be looped over at random until you get a hash smaller than the target, is just 32 bits long.
Can you explain why it is then so difficult to loop over an unsigned int, and how it becomes increasingly difficult over time? Thank you.
The task is: try different nonce values in your potential block until you reach a block having a hash value below some given threshold.
I can't find the source right now, but I'm quite sure that since the introduction of special mining ASICs the 32-bit nonce is no longer enough to keep the miners busy for the planned 10-minute interval between blocks. They are able to compute all 4 billion block hashes in less than 10 minutes.
Increasing the difficulty didn't help anymore, as that reached the point where none of the 4 billion possible nonce values gave a hash below the threshold.
So they found some additional fields in the block that are now used as nonce-extension. The principle is still the same: try different values until you reach a block with a hash below the threshold, only now it's more than 32 bits that can be varied, allowing for the threshold to be lowered beyond the former 32-bit-implied barrier.
Because it's not just the 32-bit nonce that is involved in the calculation. The 1 MB of transaction data is also part of the mining input. There is then a non-trivial amount of arithmetic to arrive at the output, which can then be compared with the target.
Bitcoin mining is looping over all 4 billion uints until you find a "right" one.
The way that difficulty is increased is that only some of the bits of the output matter. E.g. early on, the lowest 11 bits had to be some specific pattern and the remaining 21 bits could be anything. In theory there would be 2 million "right" values for each transaction block, uniformly distributed across the range of a uint. Then the "difficulty" is increased so that 13 bits have to be some pattern; now there are 4x fewer "right" answers, so it takes (on average) 4x longer to find one.
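A toy version of that loop in Python. Real Bitcoin double-SHA-256es an 80-byte block header rather than hashing like this, so treat it purely as an illustration of how each extra difficulty bit doubles the expected work:

import hashlib

def mine(block_data: bytes, difficulty_bits: int):
    """Search the 32-bit nonce space for a hash below the target."""
    target = 1 << (256 - difficulty_bits)  # smaller target = harder
    for nonce in range(2**32):
        digest = hashlib.sha256(block_data + nonce.to_bytes(4, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None  # nonce space exhausted: vary other block fields

print(mine(b"toy transactions", 16))  # ~65,536 tries on average

Returning None is exactly the ASIC-era situation described in the other answer: all 4 billion nonces tried without success, so miners vary the extra-nonce fields and start over.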
I am trying to bucket certain features into groups. The data.frame below (grouped) is my "key" (think Excel vlookup):
Original Grouped
1 Features Constant
2 PhoneService Constant
3 PhoneServices Constant
4 Surcharges Constant
5 CallingPlans Constant
6 Taxes Constant
7 LDUsage Noise
8 RegionalUsage Noise
9 LocalUsage Noise
10 Late fees Noise
11 SpecialServices Noise
12 TFUsage Noise
13 VoipUsage Noise
14 CCUsage Noise
15 Credits Credits
16 OneTime OneTime
I then reference my database which has a column (BillSection) that takes on a specific value from grouped$Original, and I want to group it according to grouped$Grouped. I am using the sapply function to perform this operation. Then I cbind the resulting output to my original data.frame.
# For the first 100 records of bill.data, look up the row of `grouped`
# whose Original matches BillSection and take its Grouped value.
grouper <- as.character(sapply(as.character(bill.data$BillSection[1:100]),
                               function(x) grouped[grouped$Original == x, 2]))
cbind(bill.data[1:100, ], as.data.frame(grouper))
The above code works as expected, but it's slow when I apply it to my whole database, which exceeds 10,000,000 unique records. Is there an alternative to this method? I know I can use plyr, but it's even slower (I think) than sapply. I was trying to figure it out with data.table but had no luck. Any suggestions would be helpful. I am open to coding this in Python, which I am new to but have heard is much faster than R, since I am dealing with large datasets very often. I wanted to know if R can do this fast enough to be useful.
Thanks!
I'm not sure I understand your question, but can you use merge()? i.e. something like...
merge(bill.data, grouped, by.x = 'BillSection',
      by.y = 'Original', all.x = TRUE)
NB: plyr has a parallel option...
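And since you mentioned being open to Python, here's a sketch of the same left join in pandas (column names taken from your example; the sample data is made up):

import pandas as pd

grouped = pd.DataFrame({
    "Original": ["PhoneService", "Surcharges", "LDUsage"],
    "Grouped":  ["Constant", "Constant", "Noise"],
})
bill_data = pd.DataFrame({"BillSection": ["LDUsage", "PhoneService"]})

# how="left" keeps every row of bill_data, like all.x = TRUE in merge().
# The join is hash-based, so it scales to tens of millions of rows.
result = bill_data.merge(grouped, how="left",
                         left_on="BillSection", right_on="Original")
print(result[["BillSection", "Grouped"]])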
I don't know if this is the right section... but here goes:
Last week's contest on interviewstreet (Code Sprint 3) had a problem called bowling (10-pin bowling, N frames). The point is to count the number of ways to score M points by playing N frames.
Problem Statement is here: http://pastebin.com/cyeLML8U
I'm pretty sure I've solved the problem using 2-dimensional DP. However, I get the 3rd sample case wrong (1 frame, 25 points). The sample answer is 1, but I get 6.
This is their explanation of the sample answer:
For the third case, there is only 1 way. Score a strike in the first frame, score another strike with the first extra ball, and an additional 5 with the second extra ball.
However, can't you score a strike in the first (and only) frame, then score any of the following with the two extra balls?
10 5
9 6
8 7
7 8
6 9
5 10
I can't wrap my head around why "1" is the right answer... I've looked on Wikipedia for the rules too.
Their answer is probably right, and I'm probably overlooking something REALLY obvious. Can anyone tell me what's wrong with my answer?
You cannot get 9 pins with the first extra ball and then 6 pins with the second extra ball because there is only 1 pin left standing when you bowl the second extra ball.
But if you don't get a strike on the second ball, you only have the opportunity to "pick up the spare." That is, you only get 10 pins. So if you get a strike on the first ball and then 9 pins on the second ball, the most you can get on the third ball is 1.
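If it helps convince you, here's a tiny Python brute force over the two extra balls with that pin constraint applied (just a sanity check, not the contest's intended solution):

def ways_to_25_after_a_strike():
    """Count (ball1, ball2) extra-ball pairs giving 10 + b1 + b2 = 25,
    where the pins are only reset after a strike."""
    count = 0
    for b1 in range(11):  # first extra ball: 0..10
        limit = 10 if b1 == 10 else 10 - b1  # reset only after a strike
        for b2 in range(limit + 1):
            if 10 + b1 + b2 == 25:
                count += 1
    return count

print(ways_to_25_after_a_strike())  # 1: strike, strike, 5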
The way I read it, your answer is technically correct, but I don't think the question was asked correctly.
Within the constraints as set out in the link in your question, I can't see what's wrong with your solution. In real life, the pins won't actually be reset unless you've knocked them all down or have bowled twice (or both), so, as others have said, the only way you can score 25 from a single frame in real life is strike, strike, 5.
Basically, the question didn't give you the correct constraints. I don't think it's valid to say you got the answer wrong, because the question was poorly phrased.