Swift random number problems - Xcode

I am using Swift with Xcode 6 Beta 5, and trying to generate a random number between my specific constraints.
Currently I am using the arc4random_uniform() function, which works for simple generation, like:
arc4random_uniform(5)
But I am trying to create a random number with some complicated boundaries. This is my attempt:
randomMovement = Int( arc4random_uniform( (0.25 * Int(cs)) - ( 0.175 * rd ) ) ) + Int( 0.175 * rd )
cs and rd are both integers - initially, cs = 100, and rd = 40. This should generate a random number between 7 and 25. However, I run into many problems in Xcode with this, due to the variable types.
With this code, it initially shows one error - on the first '*', it says:
'Int' is not convertible to 'UInt8'
But when I change that, it comes up with more problems, and it seems like I keep going round in a circle.
What's going wrong, and how can I fix this?

When you have type problems, build up your expression step by step.
import Darwin
let cs = 100
let rd = 40
let min = UInt32(0.175 * Float(rd))
let max = UInt32(0.25 * Float(cs))
// This isn't really necessary, the next line will crash if it's not true, but
// just calling out this implicit assumption
assert(max >= min)
let randomMovement = Int(arc4random_uniform(max - min) + min)
arc4random_uniform takes a UInt32, so you need to get there. You can't multiply Float and Int without a typecast, so you need to add those.
When I say "build up step-by-step" I mean for you, not the compiler. The compiler can handle:
let randomMovement = Int(arc4random_uniform(UInt32(0.25 * Float(cs)) - UInt32(0.175 * Float(rd))) + UInt32(0.175 * Float(rd)))
but it's a bit hard to understand (and may compute 0.175 * rd twice).

You can use the built-in random() function in Swift:
let randomInt = random() % 5

In Swift 4.2 you can do something like:
let number = Int.random(in: 0 ..< 10)

With Swift 4.2, you can now call Int.random directly.
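Applied to the bounds from the original question, a sketch might look like this (assuming the same cs and rd values; the half-open range mirrors arc4random_uniform):
let cs = 100
let rd = 40
let lower = Int((0.175 * Double(rd)).rounded())   // 7
let upper = Int((0.25 * Double(cs)).rounded())    // 25
// Half-open range, like arc4random_uniform: yields a value in 7...24
let randomMovement = Int.random(in: lower ..< upper)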

let min = a   // your lower bound
let max = b   // your upper bound
let x = min + Int(arc4random_uniform(UInt32(max - min)))

Related

envelope function (spatstat) - error "unused arguments"

I would like to ask for your help in finding out why, when I use the envelope function, my arguments are not accepted but are reported as "unused arguments".
The data I'm using are a ppp object without marks, and I would like to create an L function graph with simulated data and my data.
Here the code for my ppp data:
map2008MLW = ppp(xy2008_BNGppp$x, xy2008_BNGppp$y, window = IoM_polygon_MLWowin)
And then:
L2008 = Lest(map2008MLW,correction="Ripley")
OP = par(mar=c(5,5,4,4))
plot(L2008, . -r ~ r, ylab=expression(hat("L")), xlab = "d (m)"); par(OP)
L2008$iso = L$iso - L$r
L2008$theo = L$theo - L$r
# Desired number of simulations
n = 9999
# Desired p significance level to display
p = 0.05
And at this point the envelope function doesn't seem very happy:
EL2008 = envelope(map2008MLW[W], Lest, nsim=n, rank=(p * (n + 1)))
Error in envelope(map2008MLW[W], Lest, nsim = n, rank = (p * (n + 1))) :
unused arguments (nsim = n, rank = (p * (n + 1)))
It seems like a generic error and I am not sure it is caused by the spatstat package. Please help me find a solution, as I can't proceed with my analyses.
Thank you very much,
Martina
The argument rank should be nrank.
Also the relationship between the significance level and the argument nrank is not correct in the example. For a two-sided test, the significance level is alpha = 2 * nrank/(nsim+1), so nrank= alpha * (nsim+1)/2.
With rank = p * (n + 1) = 500 you would effectively be asking for a significance level of 2 * 500/10000 = 0.1, but I assume you mean 0.05. So with nsim=9999 you want nrank = 0.05 * 10000/2 = 250 to get a test with significance level 0.05.
Such a large number of simulations (9999) is unnecessary in this kind of application. Monte Carlo tests are valid with small values of nsim. In your example I would normally use nsim=39 and nrank=1.
See Chapter 10 of the spatstat book.

Matthews Correlation Coefficient yielding values outside of [-1,1]

I'm using the formula found on Wikipedia for calculating Matthew's Correlation Coefficient. It works fairly well, most of the time, but I'm running into problems in my tool's implementation, and I'm not seeing the problem.
MCC = (TP*TN - FP*FN) / sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
where TP, TN, FP, and FN are the non-negative integer counts of the appropriate fields. This should only return values in [-1, 1].
My implementation is as follows:
double ret;
if ((TruePositives + FalsePositives) == 0 || (TruePositives + FalseNegatives) == 0 ||
    (TrueNegatives + FalsePositives) == 0 || (TrueNegatives + FalseNegatives) == 0)
{
    // To avoid dividing by zero
    ret = (double)(TruePositives * TrueNegatives -
                   FalsePositives * FalseNegatives);
}
else
{
    double num = (double)(TruePositives * TrueNegatives -
                          FalsePositives * FalseNegatives);
    double denom = (TruePositives + FalsePositives) *
                   (TruePositives + FalseNegatives) *
                   (TrueNegatives + FalsePositives) *
                   (TrueNegatives + FalseNegatives);
    denom = Math.Sqrt(denom);
    ret = num / denom;
}
return ret;
When I use this, as I said it works properly most of the time, but for instance if TP=280, TN = 273, FP = 67, and FN = 20, then we get:
MCC = ((280*273) - (67*20)) / sqrt(347*300*340*293) = 75100 / 42196.06 ≈ 1.78
Is this normal behavior of Matthews Correlation Coefficient? I'm a programmer by trade, so statistics aren't a part of my formal training. Also, I've looked at questions with answers, and none of them discuss this behavior. Is it a bug in my code or in the formula itself?
The code is clear and looks correct. (But one's eyes can always deceive.)
One concern is whether the output is guaranteed to lie between -1 and 1. Assuming all inputs are nonnegative, though, we can round the numerator up and the denominator down, thereby overestimating the result, by zeroing out all the "False*" terms, producing
TP*TN / Sqrt(TP*TN*TP*TN) = 1.
The lower limit is obtained similarly by zeroing out all the "True*" terms. Therefore, working code cannot produce a value larger than 1 in size unless it is presented with invalid input.
I therefore recommend placing a guard (such as an Assert statement) to assure the inputs are nonnegative. (Clearly it matters not in the preceding argument whether they are integral.) Place another assertion to check that the output is in the interval [-1,1]. Together, these will detect either or both of (a) invalid inputs or (b) an error in the calculation.
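As a rough illustration of those guards, here is a sketch in Swift rather than the question's C# (the function name and the zero-denominator convention are mine). Doing the arithmetic in Double also keeps the four-way product in the denominator from overflowing a 32-bit integer before the square root is taken:
func matthewsCC(tp: Int, tn: Int, fp: Int, fn: Int) -> Double {
    // Guard the inputs, as recommended above.
    assert(tp >= 0 && tn >= 0 && fp >= 0 && fn >= 0, "counts must be non-negative")
    let num = Double(tp) * Double(tn) - Double(fp) * Double(fn)
    let denom = (Double(tp + fp) * Double(tp + fn) * Double(tn + fp) * Double(tn + fn)).squareRoot()
    // One common convention when a marginal total is zero; the question's code returns the raw numerator instead.
    let mcc = denom == 0 ? 0 : num / denom
    // Guard the output, as recommended above.
    assert(mcc >= -1 && mcc <= 1, "MCC must lie in [-1, 1]")
    return mcc
}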

Swift rand() not being random

Today is my first day with Swift, and I have run into a problem. I am using rand to generate a random number, but it is giving me the same results every time I run the code.
main.swift:
import Foundation
var player = Player()
for _ in 1..6 {
    println(player.kick())
}
player.swift:
import Foundation

class Player {
    var health = 25
    var xp = 15
    var upgrades = ["kick": 0, "punch": 0]

    func kick() -> Int {
        let range = (3, 7)
        let damage = Int(rand()) % (range.1 - range.0) + range.0 + 1
        return damage
    }

    func punch() -> Int {
        let range = (4, 6)
        let damage = Int(rand()) % (range.1 - range.0) + range.0 + 1
        return damage
    }
}
Every time I run the code, it logs these numbers:
7
5
5
6
6
I also tried this: Int(arc4random(range.1 - range.0)) + range.0 + 1 but it said it couldn't find an overload for + that accepts the supplied arguments
I have no idea why this would be happening. I'd appreciate some help, thanks!
You should never use rand(), use arc4random - it's a much better generator. If you check its man-pages, you'll find that it has an integer range generator form called arc4random_uniform(), which you should use to avoid modulo bias when the modulus is not a power of 2. I believe the following is what you want, it worked for me in playground:
let damage = arc4random_uniform(UInt32(range.1 - range.0) + 1) + UInt32(range.0)
The + 1 is because the upper end of arc4random_uniform() is non-inclusive. If your range is (4,7), this should give occurrences of 4, 5, 6, and 7.
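Applied to the kick() method from the question (inside the existing Player class, which already imports Foundation), a sketch might look like:
func kick() -> Int {
    let range = (3, 7)
    // arc4random_uniform(n) returns 0..<n, so add 1 to make the upper bound inclusive
    let damage = Int(arc4random_uniform(UInt32(range.1 - range.0) + 1)) + range.0
    return damage
}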
rand() in most programming environments gives you a repeatable sequence of pseudo-random numbers, by design. Look for a function called seed or srand for ways to initialize the random number generator.
Using rand() is fine, you can seed the pseudo-random number generator with this call at the beginning of your program:
srand(UInt32(time(nil)))
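For completeness, a sketch of where that seeding call would go in the question's main.swift (the rest of the code can stay as it is):
import Foundation
// Seed the C library generator once, before any rand() calls,
// so each run of the program produces a different sequence.
srand(UInt32(time(nil)))
var player = Player()
for _ in 1..6 {
    println(player.kick())
}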

Better random "feeling" integer generator for short sequences

I'm trying to figure out a way to create random numbers that "feel" random over short sequences. This is for a quiz game, where there are four possible choices, and the software needs to pick one of the four spots in which to put the correct answer before filling in the other three with distractors.
Obviously, arc4random() % 4 will create more than sufficiently random results over a long sequence, but in a short sequence it's entirely possible (and a frequent occurrence!) to have five or six of the same number come back in a row. This is what I'm aiming to avoid.
I also don't want to simply say "never pick the same square twice," because that results in only three possible answers for every question but the first. Currently I'm doing something like this:
bool acceptable = NO;
do {
    currentAnswer = arc4random() % 4;
    if (currentAnswer == lastAnswer) {
        if (arc4random() % 4 == 0) {
            acceptable = YES;
        }
    } else {
        acceptable = YES;
    }
} while (!acceptable);
Is there a better solution to this that I'm overlooking?
If your question was how to compute currentAnswer using your example's probabilities non-iteratively, Guffa has your answer.
If the question is how to avoid random-clustering without violating equiprobability and you know the upper bound of the length of the list, then consider the following algorithm which is kind of like un-sorting:
from random import randrange
# randrange(a, b) yields a <= N < b

def decluster():
    for i in range(seq_len):
        j = (i + 1) % seq_len
        if seq[i] == seq[j]:
            i_swap = randrange(i, seq_len)  # is best lower bound 0, i, j?
            if seq[j] != seq[i_swap]:
                print 'swap', j, i_swap, (seq[j], seq[i_swap])
                seq[j], seq[i_swap] = seq[i_swap], seq[j]

seq_len = 20
seq = [randrange(1, 5) for _ in range(seq_len)]; print seq
decluster(); print seq
decluster(); print seq
where any relation to actual working Python code is purely coincidental. I'm pretty sure the prior probabilities are maintained, and it does seem to break clusters (and occasionally adds some). But I'm pretty sleepy, so this is for amusement purposes only.
You populate an array of outcomes, then shuffle it, then assign them in that order.
So for just 8 questions:
answer_slots = [0,0,1,1,2,2,3,3]
shuffle(answer_slots)
print answer_slots
[1,3,2,1,0,2,3,0]
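In Swift, the standard library's shuffled() (available since Swift 4.2) covers the shuffle step directly; a sketch for the 8-question example above:
// One slot per question; each answer position 0...3 appears twice for 8 questions.
let answerSlots = [0, 0, 1, 1, 2, 2, 3, 3].shuffled()
// Use answerSlots[questionIndex] as the position of the correct answer.
print(answerSlots)   // e.g. [1, 3, 2, 1, 0, 2, 3, 0]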
To reduce the probability for a repeated number by 25%, you can pick a random number between 0 and 3.75, and then rotate it so that the 0.75 ends up at the previous answer.
To avoid using floating point values, you can multiply the factors by four:
Pseudo code (where / is an integer division):
currentAnswer = ((random(0..14) + (lastAnswer + 1) * 4) % 16) / 4
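A sketch of this in Swift (the function name is mine; rotating by (lastAnswer + 1) * 4 is what places the shortened, three-quarter-width slot on lastAnswer):
import Darwin

func nextAnswer(after lastAnswer: Int) -> Int {
    // 15 equally likely quarter-slots: after rotation, lastAnswer is covered by
    // only 3 of them and every other answer by 4, so a repeat is 25% less likely.
    let r = Int(arc4random_uniform(15))   // 0..14
    return ((r + (lastAnswer + 1) * 4) % 16) / 4
}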
Set up a weighted array. Let's say the last value was a 2. Make an array like this:
array = [0,0,0,0,1,1,1,1,2,3,3,3,3];
Then pick a number in the array.
newValue = array[arc4random() % 13];
Now switch to using math instead of an array.
newValue = ( ( ( arc4random() % 13 ) / 4 ) + 1 + oldValue ) % 4;
For P possibilities and a weight 0<W<=1 use:
newValue = ( ( ( arc4random() % (P/W - P*(1-W)) ) * W ) + 1 + oldValue ) % P;
For P=4 and W=1/4, (P/W - P*(1-W)) = 13. This says the last value will be 1/4 as likely as other values.
If you completely eliminate the most recent answer it will be just as noticeable as the most recent answer showing up too often. I do not know what weight will feel right to you, but 1/4 is a good starting point.
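A sketch of the weighted-array idea in Swift, parameterised so the previous value is weightNum/weightDen times as likely as any other (the function name is mine; previous: 2 with weight 1/4 reproduces the 13-element array above):
import Darwin

func weightedNext(previous: Int, count: Int, weightNum: Int, weightDen: Int) -> Int {
    var pool: [Int] = []
    for value in 0..<count {
        // The previous value gets weightNum copies, every other value gets weightDen.
        pool += Array(repeating: value, count: value == previous ? weightNum : weightDen)
    }
    return pool[Int(arc4random_uniform(UInt32(pool.count)))]
}

let next = weightedNext(previous: 2, count: 4, weightNum: 1, weightDen: 4)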

Smoothly make a number approach zero

I have a floating point value X which is animated. When in rest it's at zero, but at times an outside source may change it to somewhere between -1 and 1.
If that happens I want it to go smoothly back to 0. I currently do something like
addToXspeed(-x * FACTOR);
// below is out of my control
function addToXspeed(bla) {
    xspeed += bla;
    x += xspeed;
}
every step of the animation, but that only causes X to oscillate. I want it to come to rest at 0, however.
(I've explained the problem in abstracts. The specific thing I'm trying to do is make a jumping game character balance himself upright in the air by applying rotational force)
Interesting problem.
What you are asking for is the stabilization of the following discrete-time linear system:
| x(t+1)      |   | 1  dt | | x(t)      |   | 0 |
| xspeed(t+1) | = | 0   1 | | xspeed(t) | + | 1 | u(t)
where dt is the sampling time and u(t) is the quantity you addToXspeed(). (Further, the system is subject to random disturbances on the first variable x, which I don't show in the equation above.) Now if you "set the control input equal to a linear feedback of the state", i.e.
u(t) = [a  b] | x(t)      | = a*x(t) + b*xspeed(t)
              | xspeed(t) |
then the "closed-loop" system becomes
| x(t+1)      |   | 1   dt  | | x(t)      |
| xspeed(t+1) | = | a  b+1  | | xspeed(t) |
Now, in order to obtain "asymptotic stability" of the system, we stipulate that the eigenvalues of the closed-loop matrix are placed "inside the complex unit circle", and we do this by tuning a and b. We place the eigenvalues, say, at 0.5.
Therefore the characteristic polynomial of the closed-loop matrix, which is
(s - 1)(s - (b+1)) - a*dt = s^2 -(2+b)*s + (b+1-a*dt)
should equal
(s - 0.5)^2 = s^2 - s + 0.25
This is easily attained if we choose
b = -1, a = -0.25/dt
or
u(t) = a*x(t) + b*xspeed(t) = -(0.25/dt)*x(t) - xspeed(t)
addToXspeed(u(t))
which is more or less what appears in your own answer
targetxspeed = -x * FACTOR;
addToXspeed(targetxspeed - xspeed);
where, if we are asked to place the eigenvalues at 0.5, we should set FACTOR = (0.25/dt).
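A sketch of that feedback law in Swift (dt, x, and xspeed are assumed to come from the simulation; FACTOR follows the eigenvalues-at-0.5 choice above):
let dt = 1.0 / 60.0          // assumed simulation time step
let FACTOR = 0.25 / dt       // from placing both eigenvalues at 0.5

// u(t) = -(0.25/dt)*x(t) - xspeed(t), applied once per animation step:
func stabilisingInput(x: Double, xspeed: Double) -> Double {
    return -FACTOR * x - xspeed
}
// each step: addToXspeed(stabilisingInput(x: x, xspeed: xspeed))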
x = x * FACTOR
This should do the trick when FACTOR is between 0 and 1.
The lower the factor, the quicker x goes to 0.
Why don't you define a fixed step to be decremented from x?
You just have to be sure to make it small enough that the character doesn't appear to move in abrupt bursts, but not so small that the movement is imperceptible.
Writing the question often results in realising the answer.
targetxspeed = -x * FACTOR;
addToXspeed(targetxspeed - xspeed);
// below is out of my control
function addToXspeed(bla) {
    xspeed += bla;
    x += xspeed;
}
So simple too
If you want to scale it but can only add, then you have to figure out which value to add in order to get the desired scaling:
Let's say x = 0.543, and we want to cause it to rapidly go towards 0, i.e. by dropping it by 95%.
We want to do:
scaled_x = x * (1.0 - 0.95);
This would leave x at 0.543 * 0.05, or 0.02715. The difference between this value and the original is then what you need to add to get this value:
delta = scaled_x - x;
This would make delta equal -0.51585, which is what you need to add to scale x down to 5% of its original value.
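A short Swift sketch of the same arithmetic (the 95% drop is just the example's value):
var x = 0.543
let drop = 0.95                      // drop x by 95% each step
let scaledX = x * (1.0 - drop)       // 0.02715
let delta = scaledX - x              // -0.51585
// If all you can do is add, adding delta performs the scaling:
x += delta                           // x is now 0.02715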
