Why add 0 when using nextInt() - random

I am following an instructor creating a tic-tac-toe game, making it autoplay.
var r = Random()
val randInt = r.nextInt(emptyCell.size-0) + 0 // adding 0 here
why do we need to add +0 here?

There's no reason you'd have to write + 0 in that case. nextInt returns an Int, so adding 0 to it does absolutely nothing: it doesn't change the type or affect the value, as you'd expect.
Probably a typo in the tutorial.

It's a placeholder for changing the value if you wish to; the author just showed you where and how to put it.
Here's how your code could look:
var random = Random()
var randomIndex: Int?
randomIndex = random.nextInt(emptyCell.size - 1) + 2 // an offset of two instead of zero
println("randomIndex $randomIndex")
val emptyCellId = emptyCell[randomIndex]
println("emptyCellId $emptyCellId")
var btnSelect: Button?
btnSelect = setButtonId(noOfCards, emptyCellId)

Adding 0 will work but it does not change anything.
Note that you are using Java's java.util.Random which would limit your code to the JVM.
If you use kotlin.random.Random your code will target all platforms that Kotlin does and would be simpler because you don't need to instantiate a class.
You can use it like this:
val randInt = Random.nextInt(emptyCell.size)
Check out the other variants of nextInt if you don't need to specify bounds at all, or if you also need to specify a lower bound.
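For example, a quick sketch of those variants (using a stand-in for the question's emptyCell list):
import kotlin.random.Random

fun main() {
    val emptyCell = listOf(2, 5, 7)                    // stand-in for the question's list

    val anyInt = Random.nextInt()                      // no bounds: any Int value
    val index = Random.nextInt(emptyCell.size)         // in 0 until emptyCell.size
    val withLower = Random.nextInt(1, emptyCell.size)  // lower bound too: 1 until emptyCell.size
    println("$anyInt $index $withLower")
}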

Related

Why does Go handle cutoff in atoi.go like this?

// code in atoi.go, line 90
var cutoff uint64
switch base {
case 10:
    cutoff = maxUint64/10 + 1
case 16:
    cutoff = maxUint64/16 + 1
default:
    cutoff = maxUint64/uint64(base) + 1
}
I saw this code in the file atoi.go of the Go standard library. Why not write it like below?
var cutoff = maxUint64/uint64(base) + 1
Thanks a lot.
I think the comment above the line you are referring to may answer your question:
// Use compile-time constants for common cases.
Because maxUint64/10 + 1 and maxUint64/16 + 1 only reference constants, the compiler can calculate them at compile time. The result is that there is no need to perform the division at runtime every time ParseUint is called. You can see the benchmarks in the commit.
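As a rough sketch of the same idea outside strconv (using math.MaxUint64 in place of the package's unexported maxUint64 constant), the common bases fold to a single literal at compile time while the default case divides at run time:
package main

import (
    "fmt"
    "math"
)

// cutoffFor mirrors the pattern from atoi.go: for bases 10 and 16 the whole
// expression is a constant, so the compiler folds it into a literal; only
// the default branch performs a division at run time.
func cutoffFor(base int) uint64 {
    var cutoff uint64
    switch base {
    case 10:
        cutoff = math.MaxUint64/10 + 1 // constant-folded at compile time
    case 16:
        cutoff = math.MaxUint64/16 + 1 // constant-folded at compile time
    default:
        cutoff = math.MaxUint64/uint64(base) + 1 // computed at run time
    }
    return cutoff
}

func main() {
    fmt.Println(cutoffFor(10), cutoffFor(16), cutoffFor(2))
}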

Is there a way to use range with Z3 ints in z3py?

I'm relatively new to Z3 and experimenting with it in Python. I've coded a program which returns the order in which different actions are performed, each represented by a number. Z3 returns an integer representing the second at which the action starts.
Now I want to look at the model and see if there is an instant of time where nothing happens. To do this I made a list with only 0's, and I want to set the indexes to 1 at the times where each action is being executed. For instance, if an action starts at the 5th second and takes 8 seconds to execute, indexes 5 to 12 would be set to 1. Doing this with all the actions and then looking for 0's in the list would hopefully give me the instants where nothing happens.
The problem is: I would like to write something like this for coding the problem
list_for_check = [0]*total_time
m = s.model()
for action in actions:
    for index in range(m.evaluate(action.number), m.evaluate(action.number) + action.time_it_takes):
        list_for_check[index] = 1
But I get the error:
'IntNumRef' object cannot be interpreted as an integer
I've understood that Z3 isn't returning normal ints or bools in their models, but writing
if m.evaluate(action.boolean):
works, so I'm assuming the if is overridden in some way, but this doesn't seem to be the case with range. So my question is: is there a way to use range with Z3 ints? Or is there another way to do this?
The problem might also be that action.time_it_takes is an integer and adding a Z3int with a "normal" int doesn't work. (Done in the second part of the range).
I've also tried using int(m.evaluate(action.number)), but it doesn't work.
Thanks in advance :)
When you call evaluate it returns an IntNumRef, which is z3's internal representation of an integer value. You need to call its as_long() method to convert it to a Python number. Here's an example:
from z3 import *

s = Solver()
a = Int('a')
s.add(a > 4)
s.add(a < 7)

if s.check() == sat:
    m = s.model()
    print("a is %s" % m.evaluate(a))
    print("Iterating from a to a+5:")
    av = m.evaluate(a).as_long()
    for index in range(av, av + 5):
        print(index)
When I run this, I get:
a is 5
Iterating from a to a+5:
5
6
7
8
9
which is exactly what you're trying to achieve.
The method as_long() is defined here. Note that there are similar conversion functions from bit-vectors and rationals as well. You can search the z3py api using the interface at: https://z3prover.github.io/api/html/namespacez3py.html
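For instance, a small sketch of those conversions (the variable names here are mine):
from z3 import *

s = Solver()
x = BitVec('x', 8)
q = Real('q')
s.add(x == 42, q == Q(3, 2))

if s.check() == sat:
    m = s.model()
    print(m.evaluate(x).as_long())        # bit-vector value as a Python int: 42
    r = m.evaluate(q)
    print(r.as_fraction())                # rational value as a Python Fraction: 3/2
    print(r.numerator_as_long(), r.denominator_as_long())  # 3 2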

Range of doubles in Swift

I am currently writing a Swift application and parts of it require making sure certain user inputs add up to a specified value.
A simplified example:
Through program interaction, the user has specified that totalValue = 67 and that turns = 2. This means that in two inputs, the user will have to provide two values that add up to 67.
So let's say on turn 1 the user enters 32, and then on turn 2 he enters 35; this would be valid because 32 + 35 = 67.
This all works fine, but the moment we go beyond one decimal place, the program cannot add the numbers correctly. For example, if totalValue = 67 and on turn 1 the user enters 66.95 and then on turn 2 he enters .05, the program will report an error despite the fact that 66.95 + .05 = 67. This problem does not happen with one decimal place or less (something like turn 1 = 55.5 and turn 2 = 11.5 works fine), only for two decimal places and beyond. I am storing the values as doubles. Thanks in advance.
Some example code:
var totalWeights = 67.0 // a Double, so it can be compared with the input below
var input = Double(myTextField.text.bridgeToObjectiveC().doubleValue)

/* Each turn is for a button click */

/* For turn 1 */
if inputValid == true && turn == 1 && input < totalWeights {
    myArray[0] = input
} else {
    //show error string
}

/* For turn 2 */
if inputValid == true && turn == 2 && input == (totalWeights - myArray[0]) {
    myArray[1] = input
} else {
    //show error string
}
If you want exact values then floating point won't do: the Float/Double types are only ever approximations of exact decimal numbers. Look into using the NSDecimalNumber class from within Swift; I'm not sure what the bridging would look like, but it should be simple.
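For instance, a minimal sketch using Swift's bridged Decimal value type (the names here are placeholders, not the question's actual variables):
import Foundation

let totalValue = Decimal(67)
let entries = ["66.95", "0.05"]            // the user's raw text inputs

// Decimal stores base-10 digits, so 66.95 and 0.05 are represented exactly
let sum = entries
    .compactMap { Decimal(string: $0) }    // nil for text that isn't a number
    .reduce(Decimal(0), +)

print(sum == totalValue)                   // true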
Here is an example of how this could work, summing the inputs as doubles:
var a = 0.0
for num in numlist {
    a += num
}
var result = false
if a == targetnum {
    result = true
}
I haven't tested this out, but if numlist is an array of Double then it should work for any input that is a valid number.
One problem I just realized: there is an issue with doing an equality check on doubles, as rounding will cause problems for you. I am not going to show it here, but while reading in the inputs you could keep track of how many digits appear to the right of the decimal point, multiply all of the values by the corresponding power of ten (so 66.95 * 100) to get integers, add those, and then do the comparison after multiplying targetnum by the same value (100).
Unfortunately there is no ideal solution to this; we must use an approximate comparison.
For example, instead of checking:
if val1 == val2
we must try something like:
if val1 > (val2 - .0005) && val1 < (val2 + .0005)
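Or, the same idea wrapped in a small helper (the 0.0005 tolerance is arbitrary; pick whatever suits your inputs):
func approximatelyEqual(_ a: Double, _ b: Double, tolerance: Double = 0.0005) -> Bool {
    return abs(a - b) < tolerance
}

print(approximatelyEqual(66.95 + 0.05, 67.0))   // true, despite binary rounding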

Conditional Count inside of Group in .rdlc?

I have a .rdlc report, grouped.
Inside each group, I have an Id. Some of them will be positive, and others will be negative.
I need the difference between the number of positive Ids and the number of negative Ids.
Something like
=CountDistinct(Fields!Id.Value) where Fields!Id.Value > 0 - CountDistinct(Fields!Id.Value) where Fields!Id.Value < 0
How can I do that? I'm thinking of writing a function, but I want to know if there is a simpler way.
Edit: An Id can appear more than once in each group; that's why I use CountDistinct.
You can try this:
CountDistinct(IIf(Fields!Id.Value > 0, Fields!Id.Value, Nothing))
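Combining that with its mirror image for the negatives gives the difference the question asks for (untested, but it follows the same pattern):
=CountDistinct(IIf(Fields!Id.Value > 0, Fields!Id.Value, Nothing)) - CountDistinct(IIf(Fields!Id.Value < 0, Fields!Id.Value, Nothing))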
Create two global variables, one for positives and one for negatives.
Then create a new formula that counts them, like the following:
WhilePrintingRecords;
IF (GroupName({your_group_name}) > 0) THEN
    PositiveVar := PositiveVar + 1
ELSE
    NegativeVar := NegativeVar + 1;
You can actually look up your group in the formula editor and drag it in while writing the formula.
Since it's an operation at group level, the records should be read first. That's why we use WhilePrintingRecords rather than WhileReadingRecords.
Hope I understood your question right.

Populating a list in Scala with random doubles is taking forever

I am new to Scala and am trying to build a list of random double values.
The thing is, when I try to run the code below, it takes way too long compared to its Java counterpart. Any ideas why, or suggestions for a more efficient approach?
def random: Double = java.lang.Math.random()
var f = List(0.0)
for (i <- 1 to 200000)
  f = f ::: List(random * 100)
f = f.tail
You can also achieve it like this:
List.fill(200000)(math.random)
the same goes for e.g. Array ...
Array.fill(200000)(math.random)
etc ...
You could construct an infinite stream of random doubles:
def randomList(): Stream[Double] = Stream.cons(math.random, randomList)
val f = randomList().take(200000)
This will leverage lazy evaluation so you won't calculate a value until you actually need it. Even evaluating all 200,000 will be fast though. As an added bonus, f no longer needs to be a var.
Another possibility is:
val it = Iterator.continually(math.random)
it.take(200000).toList
Stream also has a continually method if you prefer.
First of all, it is not taking longer than Java, because there is no Java counterpart: Java does not have an immutable list. If it did, performance would be about the same.
Second, it's taking a lot of time because appending to a list has linear performance, so the whole thing has quadratic performance.
Instead of appending, prepend, which has constant performance.
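For example, a quick sketch of the prepend-and-reverse version of the original loop:
var f: List[Double] = Nil
for (_ <- 1 to 200000)
  f = (math.random * 100) :: f   // constant-time prepend
f = f.reverse                    // restore insertion order once at the end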
If you're using mutable state anyway, you should use a mutable collection like a buffer, which you can add to with += (that would then be the real counterpart to the Java code).
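For example, a rough sketch with ListBuffer:
import scala.collection.mutable.ListBuffer

val buf = ListBuffer[Double]()
for (_ <- 1 to 200000)
  buf += math.random * 100   // amortized constant-time append
val f = buf.toList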
But why don't you use a for comprehension?
val f = for (_ <- 1 to 200000) yield (math.random * 100)
By the way: var f = List(0.0) ... f = f.tail can be replaced by var f: List[Double] = Nil in your example. (No more performance, but more beauty ;)
Yet more options! Tail recursion:
def randlist(n: Int, part: List[Double] = Nil): List[Double] = {
  if (n <= 0) part
  else randlist(n - 1, 100 * random :: part)
}
or mapped ranges:
(1 to 200000).map(_ => 100*random).toList
Looks like you want to use Vector instead of List. List has O(1) prepend; Vector has (effectively) O(1) append. Since you are appending (via concatenation), it'll be faster to use Vector:
def random: Double = java.lang.Math.random()
var f: Vector[Double] = Vector()
for (i <- 1 to 200000)
  f = f :+ (random * 100)
Got it?
