I'm trying to write something that simulates the Martingale betting system. If you're not familiar with this, it's a "sure thing!" (not a sure thing) betting system for coin-toss games where you double your bet each time you lose, hoping to win back all your lost money upon the first win.
So your bets would go $10 -> loss -> $20 -> loss -> $40 -> loss -> $80 -> win! -> $10...
Simple, right? I figure the logic will be:
Have a wallet variable that starts at $1,000.
Make a bet.
Flip a coin with rand(0..1). 0 will be a loss and 1 a win.
If I win, add the bet to my wallet. If I lose, subtract the bet from my wallet, and then issue a new bet for twice the previous.
I write this as:
def flip(bet)
  if rand(0..1) == 1   # 1 is a win
    $balance += bet
  else                 # 0 is a loss: double the bet and flip again
    $balance -= bet
    flip(bet * 2)
  end
end
Then I run flip(10) a thousand times just to see how effective this betting system is.
The problem is that I always get the exact same results. I'll run the program ten times, and the first five results will always be 1010, 1020, 1030, 1040, 1050... So something's wrong. But I can't really see what; the logic seems fine to me.
Just to test things out, I removed the recursive call, the line flip(bet*2). Instead, I just ran a thousand regular bets. And that behaves the way you'd expect, different results every time.
So what's going on here?
Looking at the logic, the method recursively bets until you win, so each call to flip ends with your balance up by 10, hence the 1010, 1020, 1030, 1040, 1050.
If you put a puts $balance before the flip(bet*2) line, you can see the balance going up and down along the way.
I guess that's the point of the betting system. I don't think there is anything wrong with the random part of the method.
Your result is exactly what you'd expect from "sure thing" betting, because you allow $balance to go negative, so the bettor is not limited in any way (effectively they have infinite resources). The strategy always exits $10 up on the last balance: lose, say, 10, 20 and 40 dollars, then win 80. Because you allow a negative balance, the bettor can always keep this up, whereas a more realistic model would notice that after losing 6 games in a row (a 1 in 64 chance) they would be down to $370 and unable to make the next bet of $640.
Add something to catch running out of money, and you should see a difference in how many bets it takes before that happens, or what the losing value of $balance is. That way you can demonstrate that the "sure thing" strategy is flawed: for every 63 wins of $10, there is a single loss of $630 to perfectly balance it.
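If it helps to see that concretely, here's a rough sketch of the same strategy with a bankroll check (my own illustration in Python rather than the OP's Ruby, with an invented play_session helper): a session ends as soon as the next doubled bet can't be covered.

import random

def play_session(balance=1000, base_bet=10, rounds=1000):
    # Martingale with a finite bankroll: stop as soon as the next bet can't be covered.
    for _ in range(rounds):
        bet = base_bet
        while True:
            if bet > balance:               # bust: can't cover the next doubled bet
                return balance
            if random.randint(0, 1) == 1:   # win: recover the losses plus base_bet
                balance += bet
                break
            balance -= bet                  # loss: double and flip again
            bet *= 2
    return balance

# A few runs now give wildly different outcomes, unlike the infinite-bankroll version.
print([play_session() for _ in range(10)])

With these numbers nearly every session eventually hits a losing streak it can't cover, which is exactly the flaw described above.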
I'm running some lattice proofs through Prover9/Mace4. Prover9 is saying "Exit: Time limit." plus the message in the Title.
I've doubled the time limit from 60 to 120 seconds. Same message (in twice the time). The weird thing is:
there's only one statement to prove. That is, only one label(goal) in the report (what's with the but not all?)
it does seem to have completed the proof, in that it shows last line $F.
Mace4 can't find any counter-examples (I upped its time to 120 seconds).
I've found some GHits for that message, but they seem to be all in Chinese(?)
It's possible the axioms I've given are (mutually) recursive -- I'm trying to introduce a function and a nominated 'absorbing element' [**]; and that solving will need infinitary unification. Does Prover9 do that?
I'm happy to add the axioms and goal to this message. (I'm using a non-standard way to define the meet and join.) But first, are there any sanity checks I should go through?
[**] the absorbing element is neither lattice top nor lattice bottom; more like lattice left-corner. (The element will be lattice bottom just in case the lattice degenerates to two elements.) The function is a partial ordering 'at right angles' to top/bottom. The lattice I expect to be neither complemented nor distributive (again except when 2 elements).
I've reproduced this after much trying, but only by setting some strange option that I'm sure I wouldn't have touched. (The only option I usually change is the Time limit, and I Reset to defaults quite often, so that would have blatted any evidence.)
Here's my guess for what happened.
what's with the but not all?
You can enter multiple goals (providing they're all positive). [**]
With strange option settings, if Prover9 can prove the first but not the second, it'll keep trying until exhausted; but then only report the successful one -- with a $F. result OK.
If you double the Time limit, it'll still prove the first and still keep on trying for the second -- taking twice the time for the same outcome.
Mace4 will come across the first goal, and use up its time trying for a counter-example. There isn't one because it's provable. Again, doubling its Time limit will get the same outcome after twice as long.
[Note **] It's never that I intend to set multiple goals; but when I'm hacking/experimenting with axioms, I keep all the goals in the Goals: box so I can easily toggle un/comment. I guess I didn't comment-out one when I was uncommenting another.
The usual behaviour, as described in the manual, is that Prover9 reports success at the first goal it proves and doesn't go on to other goals. If there are multiple provable goals, it seems to choose the easiest/quickest(?) irrespective of position in the file.
But with max_proofs set to more than the default 1, Prover9 will keep trying. (There's also an auto_denials flag that has something to do with it that I don't understand.)
I've no idea how I set max_proofs -- I didn't recognise the Options/Limits sub-screen when I eventually found it. Weird.
I was going through Google interview questions, one of which is to implement random number generation from 1 to 7.
I wrote some simple code; I'd like to understand whether, if I were asked this question in an interview and wrote the code below, it would be acceptable or not.
import time

def generate_rand():
    ret = str(time.time())  # time in seconds, like 12345.1234
    ret = int(ret[-1])
    if ret == 0 or ret == 1:
        return 1
    elif ret > 7:
        ret = ret - 7
        return ret
    return ret

while 1:
    print(generate_rand())
    time.sleep(1)  # just to see the output in STDOUT
(Since the question seems to ask for analysis of issues in the code and not a solution, I am not providing one. )
The answer is unacceptable because:
You need to wait a second for each random number. Many applications need a few hundred at a time. (If the sleep is just for convenience, note that even microsecond granularity will not yield truly random numbers: the last digit increases monotonically within each 10us window, and since you can easily make several calls within 10us, you'll get a run of monotonically increasing, predictable values.)
Random numbers should have a uniform distribution: each value should have the same probability. Here the distribution is skewed: the digits 0, 1 and 8 all map to 1 (three times the probability), the digits 2 and 9 both map to 2 (twice the probability), while each of 3-7 comes from a single digit.
Typically, answers to this sort of question try to get a large range of numbers and distribute it evenly across 1-7. For example, the above method would have worked fine if you had wanted randomness from 1-5, as 10 is evenly divisible by 5. Note that this will only solve (2) above.
For (1), there are other sources of randomness, such as /dev/random on a Linux OS.
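For what it's worth, the usual way to fix the uniformity problem in (2) is rejection sampling: draw from a range you can trim to an exact multiple of 7 and throw away the leftover values. A minimal sketch (my own, using os.urandom as the randomness source rather than the system clock):

import os
from collections import Counter

def rand_1_to_7():
    # Uniform integer in 1..7 via rejection sampling on a single random byte.
    while True:
        b = os.urandom(1)[0]   # 0..255 from the OS randomness source
        if b < 252:            # 252 = 7 * 36, the largest multiple of 7 <= 256
            return b % 7 + 1   # the accepted values map evenly onto 1..7

# Quick sanity check of the distribution:
print(Counter(rand_1_to_7() for _ in range(70000)))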
You haven't really specified the constraints of the problem you're trying to solve, but if it's from a collection of interview questions it seems likely that it might be something like this.
In any case, the answer shown would not be acceptable for the following reasons:
The distribution of the results is not uniform, even if the samples you read from time.time() are uniform.
The results from time.time() will probably not be uniform. The result depends on the time at which you make the call, and if your calls are not uniformly distributed in time then the results will probably not be uniformly distributed either. In the worst case, if you're trying to randomise an array on a very fast processor then you might complete the entire operation before the time changes, so the whole array would be filled with the same value. Or at least large chunks of it would be.
The changes to the random value are highly predictable and can be inferred from the speed at which your program runs. In the very-fast-computer case you'll get a bunch of x followed by a bunch of x+1, but even if the computer is much slower or the clock is more precise, you're likely to get aliasing patterns which behave in a similarly predictable way.
Since you take the time value in decimal, it's likely that the least significant digit doesn't visit all possible values uniformly. It's most likely a conversion from binary to some arbitrary number of decimal digits, and the distribution of the least significant digit can be quite uneven when that happens.
The code should be much simpler. It's a complicated solution with many special cases, which reflects a piecemeal approach to the problem rather than an understanding of the relevant principles. An ideal solution would make the behaviour self-evident without having to consider each case individually.
The last one would probably end the interview, I'm afraid. Perhaps not if you could tell a good story about how you got there.
You need to understand the pigeonhole principle to begin to develop a solution. It looks like you're reducing the time to its least significant decimal digit for possible values 0 to 9. Legal results are 1 to 7. If you have seven pigeonholes and ten pigeons then you can start by putting your first seven pigeons into one hole each, but then you have three pigeons left. There's nowhere that you can put the remaining three pigeons (provided you only use whole pigeons) such that every hole has the same number of pigeons.
The problem is that if you pick a pigeon at random and ask what hole it's in, the answer is more likely to be a hole with two pigeons than a hole with one. This is what's called "non-uniform", and it causes all sorts of problems, depending on what you need your random numbers for.
You would either need to figure out how to ensure that all holes are filled equally, or you would have to come up with an explanation for why it doesn't matter.
Typically the "doesn't matter" answer is that each hole has either a million or a million and one pigeons in it, and for the scale of problem you're working with the bias would be undetectable.
Using the same general architecture you've created, I would do something like this:
import time

def generate_rand():
    ret = int(time.time() * 1_000_000)  # time in microseconds as an integer (taking % 8 of a string would be a TypeError)
    ret = ret % 8                       # will return pseudorandom numbers 0-7
    if ret == 0:
        return 1  # or you could also return the result of another call to generate_rand()
    return ret

while 1:
    print(generate_rand())
    time.sleep(1)
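As a quick aside (my own check, not part of the answer above): running each of the eight possible values of ret % 8 through that final if shows the residual bias the comment alludes to, with 1 still twice as likely as any other value.

from collections import Counter

# Map every possible value of ret % 8 through the if/return above.
outputs = [1 if r == 0 else r for r in range(8)]
print(Counter(outputs))  # Counter({1: 2, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1})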
I'm working on a Tic Tac Toe game and I have a #negamax method that returns the best position for the computer to move to, and a #winner method that returns 1 (computer wins) or -1 (user wins). How can I test #negamax so as to guarantee that its implementation is right and that the user never wins?
I have a few test cases in places, to test that it returns the best position, and it does, but it does not cover all possible cases. Right now, this is what I have (besides the test cases for the best choice):
it 'never allows user to win' do
  until game_over?
    unless is_ai?
      pos = empty_positions.sample
      move(pos, user)
    else
      pos = negamax(0, 1, -100, 100)
      move(pos, computer)
    end
  end
  if game.won?
    expect(winner).to eq(1)
  else
    expect(winner).to be_nil
  end
end
It does not seem very effective to just 'hope' that the test will never fail. What would be a better way to accomplish it?
but it does not cover all possible cases.
Don't worry, this is normal; it's nearly impossible to simulate all the ways an application will be used. Testing some things can lead to huge increases in results, while testing "everything" is a waste of time, because "everything" doesn't matter. Only certain things matter: the right things.
If I have code that will take a while to execute, printing out results every iteration will slow down the program a lot. To still receive occasional output to check on the progress of the code, I might have:
if (i % 10000 == 0) {
    // print progress here
}
Does checking the if statement every iteration slow it down at all? Should I just skip the output entirely and wait; would that make it noticeably faster?
Also, is it faster to do: (i % 10000 == 0) or (i == 10000)?
Is checking equality or modulus faster?
In the general case, it won't matter at all.
A slightly longer answer: it won't matter unless the loop runs millions of times and the rest of the loop body is actually less demanding than the if statement itself (for example, a single multiplication). In that case, you might see a slight performance drop.
Regarding (i % 10000 == 0) vs. (i == 10000), the latter is obviously faster, because it only compares, whereas the former does a (fairly costly) modulus as well as a comparison.
That said, neither an if statement nor a modulus will make any noticeable difference unless your loop accounts for the bulk (say 90%) of the program's running time, which is usually only the case in school exercises :). You probably spent more time asking this question than you would ever save by not printing anything. For development and debugging, printing progress this way is not a bad approach.
The golden rule for this kind of decision:
Write the most readable and explicit code you can imagine to do the thing you want it to do. If you have a performance problem, look at wrong data structures and algorithmic choices first. If you have done all of those and still need a really quick program, profile it to see which part takes the most time. Only after all that are you allowed to make this kind of low-level guess.
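If you ever do reach the profiling stage, the modulus-versus-comparison question is easy to answer with a quick measurement rather than a guess. A rough sketch using Python's timeit (the absolute numbers are machine-dependent and only illustrative):

import timeit

# Time just the two conditions, in isolation from any loop body.
mod_check = timeit.timeit("i % 10000 == 0", setup="i = 12345", number=10_000_000)
eq_check = timeit.timeit("i == 10000", setup="i = 12345", number=10_000_000)
print(f"modulus + compare: {mod_check:.3f}s   compare only: {eq_check:.3f}s")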
So let's say there's a game that has a 'life bar' consisting of theoretical levels. As the user performs specific actions, the life bar grows at a speed corresponding to the accuracy of those actions. As it grows into the next levels, the criteria for desirable actions change, so the user now has to figure out what those new actions are to keep the bar growing instead of shrinking. And while the user tries to learn which actions/patterns result in growth, things like time, along with undesirable actions, slowly bring them back down.
I'm wondering if anyone knows of any open-source games that may have similar logic.
Or perhaps if there's a name for this type of logic so I can try and find some algorithms that may help me set something like this up.
TIA
-added
As it seems there's probably no technical term for something like this, perhaps someone can suggest some pseudo top level logic. I've never built a game before and would like to raise my chances of heading in the optimum direction.
That sounds suspiciously like my Stack Overflow reputation score.
For the purpose of this code, let's pretend that the bar holds the score for the player.
Score = Max score that can be received from the action without modifier
Accuracy = [0..1] Where 0 is total miss on the action and 1 is a perfect hit.
Example: The score for a headshot
LevelModifier = [0..1] Where 0 means that in this level, it doesn't give any
scores and 1 means that the player receives the max bonus.
You can also refer to this as a difficulty modifier.
The higher the level, the more bonus you get.
ScoreDelta = (Score * Accuracy) * LevelModifier
ScoreBar += ScoreDelta
For the timer, you can lower their ScoreBar every second.
ScoreBar -= TimePenalty
For gameplay reasons, you can reset the timer whenever the player does an action. This would reward players who kept moving.
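To make that concrete, here's a small sketch of the update logic in Python (the level thresholds, modifiers and penalty values are placeholders I made up, not anything standard):

score_bar = 0.0
LEVEL_MODIFIERS = [0.2, 0.5, 1.0]  # placeholder per-level bonus multipliers
TIME_PENALTY = 1.0                 # points drained per second of inactivity

def current_level(bar):
    # Placeholder thresholds deciding which level the bar is in.
    return 0 if bar < 100 else 1 if bar < 250 else 2

def apply_action(bar, max_score, accuracy):
    # ScoreDelta = (Score * Accuracy) * LevelModifier
    return bar + max_score * accuracy * LEVEL_MODIFIERS[current_level(bar)]

def apply_time_decay(bar, seconds):
    # ScoreBar -= TimePenalty per elapsed second, never dropping below zero.
    return max(0.0, bar - TIME_PENALTY * seconds)

# Example: one fairly accurate action, then two idle seconds.
score_bar = apply_action(score_bar, max_score=50, accuracy=0.8)
score_bar = apply_time_decay(score_bar, seconds=2)
print(score_bar)  # 50 * 0.8 * 0.2 - 2 = 6.0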
It sounds like you're trying to model karma... I think there are a few web sites that have karma-like systems (SO's rep system is arguably something like that).
I'd start with something simple... If the user does "good" things, it goes up. If they do bad things it goes down. If they do nothing (sloth?), it goes down slowly.
That sounds a lot like an experience bar.
It sounds like you would be best-served by using a state machine.
State A:
* Walk Forward : ++Points
* Jump : Points += 100
* Points < 100 : Go to State A
* Points > 100 : Points = 0; Go to State B
* Points > 150 : Points = 0; Go to State C
State B:
* Kill Bad Guy : ++Points
* Get Hurt : --Points
* Points < -50 : Points = 0; Go to State A
* Points < 100 : Go to State B
* Points > 100 : Points = 0; Go to State C
* Points > 150 : Points = 0; Go to State D
...etc...
That 'Points > 150' condition is just something I made up to demonstrate the power of the state machine. If the player does something especially good to jump from less than 100 to above 150, then he gets to skip a level. You could even have bonus levels that are only accessible in this way.
Edit: Wow, I got so engrossed in my typing, that I kinda forgot what the initial problem was. Hopefully my answer makes more sense now.
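If it helps, here's a rough sketch of that state machine in Python (the event names and thresholds are made up for illustration; only states A and B are filled in, mirroring the outline above):

# Each state maps events to point deltas and defines its transition thresholds.
STATES = {
    "A": {"events": {"walk_forward": 1, "jump": 100},
          "demote": None, "advance": ("B", 100), "skip": ("C", 150)},
    "B": {"events": {"kill_bad_guy": 1, "get_hurt": -1},
          "demote": ("A", -50), "advance": ("C", 100), "skip": ("D", 150)},
    # ...etc...
}

state, points = "A", 0

def handle(event):
    # Apply the event's points, then check this state's transition thresholds.
    global state, points
    spec = STATES[state]
    points += spec["events"].get(event, 0)
    if spec["demote"] and points < spec["demote"][1]:
        state, points = spec["demote"][0], 0
    elif points > spec["skip"][1]:       # especially good: skip a level
        state, points = spec["skip"][0], 0
    elif points > spec["advance"][1]:    # good enough: advance one level
        state, points = spec["advance"][0], 0

handle("jump")        # A: 100 points, not yet strictly above the 100 threshold
handle("jump")        # A: 200 points > 150, so skip straight to state C
print(state, points)  # -> C 0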
(I think most of the other answerers are interpreting your description as logarithmic growth.)
To be honest, the best way to really do this might just be trial and error. Invent a formula, not too complicated. Play with it in game. If it feels a bit chunky or stiff or floppy or whatever, adjust it, add terms, just experiment.
Eventually, it will feel aesthetically pleasing. That's what you want. It should feel like the response of the health bar follows the effort you're putting in.
Also, just by writing the game, you'll get to know it pretty well. Be sure to give your friends, coworkers, or any random victim a chance to try it too, and see whether it feels aesthetically right to them the way it does to you.