How to make a traditional for-loop in Kotlin

In JavaScript I would write:
for (var x = 0; x < 360; x += 0.5)
{
// ...
}
How do I do this in Kotlin?
Note that my step size is a floating-point value, so a regular range won't work:
for (x in 0.0 until 360.0 step 0.5) {
// ...
}
I also need to exclude the ending value, which is why I'm using until.
I will resort to a while loop for now:
var x = 0.0
while (x < 360.0) {
    // ...
    x += 0.5
}

There isn't a way to do this right now in Kotlin because Kotlin does not have "traditional" for loops. I believe you're right in choosing a while loop. In fact, traditional for loops are just while loops in disguise:
for (init; condition; post) {
    // do something
}
can always be rewritten,
init
while (condition) {
    // do something
    post
}
with no change in behavior, because the init statement will always execute and the condition will always be checked before the loop runs even once. One thing this construct can't give you is a variable that's only scoped to this block. If you're really after that behavior, the following would work, though it's not very idiomatic.
for (x in generateSequence(0.0) { it + 0.5 }.takeWhile { it < 360.0 }) {
    println(x)
}
If you're using a Sequence, you might also be interested in the more idiomatic forEach:
generateSequence(0.0) { it + 0.5 }.takeWhile { it < 360.0 }.forEach { x ->
    // do something
}

As repeated addition is just shorthand for multiplication, you can read for (x in 0.0 until 360.0 step 0.5) as "do something as many times as 0.5 fits into 360". Therefore it would make sense to express it like this:
val times = (360 / 0.5).toInt()
repeat(times) {
    // ...
}
Of course, you can also inline times:
repeat(times = (360 / 0.5).toInt()) {
    // ...
}
The advantage is that you skip the sequence generation. The downside is that you lose access to your counting variable x (though see the sketch below).
see also https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/repeat.html
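The counting variable is easy enough to recover, though, since repeat passes the zero-based iteration index to its lambda. A small sketch (my own addition, not part of the answer above):
repeat(times = (360 / 0.5).toInt()) { i ->
    val x = i * 0.5 // 0.0, 0.5, ..., 359.5
    // do something with x
}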

I agree with the answer by @AdamYakes.
Since the until operator is not available for floating-point types, you cannot use it with a floating-point step value.
However, if you still want to use
for (x in 0.0 until 360.0 step 0.5) {
// use value
}
you can do it as follows:
for (x in 0 until 3600 step 5) {
    // use x / 10.0 as the value
}
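Concretely (a small sketch; angle is just a name I picked for the scaled value):
for (x in 0 until 3600 step 5) {
    val angle = x / 10.0 // 0.0, 0.5, ..., 359.5
    // use angle instead of x
}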

Kotlin is so flexible that one could hack a custom solution:
private infix fun (((Double) -> Unit, Double) -> Unit).step(step: Double) =
    fun(action: (Double) -> Unit) = this(action, step)

private infix fun Double.upto(to: Double) =
    fun(action: (Double) -> Unit, step: Double) {
        var v = this
        while (v < to) {
            action(v)
            v += step
        }
    }

fun main() {
    // Usage:
    (0.0 upto 360.0 step 0.5) {
        println(it)
    }
}
What I did above is to create two infix functions.
The upto() infix function can be used with two Doubles: it takes 0.0 upto 360.0 and returns a function that expects two arguments: an action function and the step double value.
The step infix function can be used with functions returned by upto() combined with a Double (so f step 0.5). It returns a function that calls the upto() function passing the step value and the user's { ... } block that gets called for each iteration.
Short and convoluted.
A simpler alternative:
inline fun doubleFor(
    from: Double, to: Double, step: Double, action: (Double) -> Unit
) {
    var v = from
    while (v < to) {
        action(v)
        v += step
    }
}

fun main() {
    // Usage:
    doubleFor(0.0, 360.0, 0.5) {
        println(it)
    }
}
A good IDE will show the argument names next to the values, making the meaning of the arguments at the call site obvious.
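You can also spell the names out yourself with Kotlin's named arguments, which keeps the call readable even without IDE hints (a small sketch using the doubleFor above):
doubleFor(from = 0.0, to = 360.0, step = 0.5) { x ->
    println(x)
}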
A third approach can be found at https://stackoverflow.com/a/44332139/2933899 which allows one to write for(x in 0.0 .. 360.0 step 0.5) { println(x) } with the inconvenience that the top boundary value is included.
Run these at https://pl.kotl.in/ZuR354Fas


Mata error 3204

I am unsure why I am getting an error.
I think it may stem from a misunderstanding around the structure syntax, but I am not certain if this is the issue (it would be unsurprising if there are multiple issues).
I am emulating code (from William Gould's The Mata Book) in which the input is a scalar, but the input for the program I am writing is a colvector.
The objective of this exercise is to create a square matrix from a column vector (according to some rules) and once created, multiply this square matrix by itself.
The code is the following:
*! spatial_lag version 1.0.0
version 15
set matastrict on
//--------------------------------------------------------------
local SL struct laginfo
local RS real scalar
local RC real colvector
local RM real matrix
//--------------------------------------------------------------
mata
`SL'
{
    //-------------------inputs:
    `RC'    v
    //-------------------derived:
    `RM'    W
    `RM'    W2
    `RS'    n
}

void lagset(`RC' v)
{
    `SL' scalar r

    // Input:
    r.v = v

    // I set the derived variables to missing:
    r.W  = .z
    r.W2 = .z
    r.n  = .z    // length of vector v
}

`RM' w_mat(`SL' scalar r)
{
    if (r.W == .z) {
        real scalar row, i
        real scalar col, j
        r.W = J(r.n, r.n, 0)
        for (i=1; i<=r.n; i++) {
            for (j=1; j<=r.n; j++) {
                if (j != i) {
                    if (r.v[j] == r.v[i]) {
                        r.W[i,j] = 1
                    }
                }
            }
        }
    }
    return(r.W)
}

`RS' wlength(`SL' scalar r)
{
    if (r.n == .z) {
        r.n = length(r.v)
    }
    return(r.n)
}

`RM' w2mat(`SL' scalar r)
{
    if (r.W2 == .z) {
        r.W2 = r.W * r.W
    }
    return(r.W2)
}
end
This compiles without a problem, but it gives an error when I attempt to use it interactively as follows:
y=(1\1\1\2\2\2)
q = lagset(y)
w_mat(q)
w2mat(q)
The first two lines run fine, but when I run the last two of those lines, I get:
w_mat(): 3204 q[0,0] found where scalar required
<istmt>: - function returned error
What am I misunderstanding?
This particular error is unrelated to structures. Stata simply complains because the lagset() function is void. That is, it does not return anything. Thus, q ends up being empty, and it is then used inappropriately as input to the function w_mat(), hence the q[0,0] reference.
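A minimal sketch of one way to fix it, reusing the question's local macros (untested, so treat it as an illustration rather than verified Mata): declare lagset() to return the structure and add a return statement.
`SL' scalar lagset(`RC' v)
{
    `SL' scalar r

    r.v  = v
    r.W  = .z
    r.W2 = .z
    r.n  = .z
    return(r)    // without this, q is empty and w_mat(q) fails with 3204
}
With that change, q = lagset(y) holds the structure that w_mat(), wlength(), and w2mat() expect.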

How to add or subtract two enum values in swift

So I have this enum that defines different view positions on a view controller when a sidebar menu is presented. I need to add, subtract, multiply, or divide the different values based on different situations. How exactly do I define methods that let me use the -, +, *, or / operators on the values of the enum? I can find plenty of examples that use the comparison operator ==, but I haven't been able to find any that use >=, which I also need.
Here is the enum
enum FrontViewPosition: Int {
    case None
    case LeftSideMostRemoved
    case LeftSideMost
    case LeftSide
    case Left
    case Right
    case RightMost
    case RightMostRemoved
}
Now I'm trying to use these operators in functions like so.
func getAdjustedFrontViewPosition(_ frontViewPosition: FrontViewPosition, forSymetry symetry: Int) {
    var frontViewPosition = frontViewPosition
    if symetry < 0 {
        frontViewPosition = .Left + symetry * (frontViewPosition - .Left)
    }
}
Also in another function like so.
func rightRevealToggle(animated: Bool) {
    var toggledFrontViewPosition: FrontViewPosition = .Left
    if self.frontViewPosition >= .Left {
        toggledFrontViewPosition = .LeftSide
    }
    self.setFrontViewPosition(toggledFrontViewPosition, animated: animated)
}
I know that i need to directly create the functions to allow me to use these operators. I just don't understand how to go about doing it. A little help would be greatly appreciated.
The type you are trying to define has a similar algebra to pointers in that you can add an offset to a pointer to get a pointer and subtract two pointers to get a difference. Define these two operators on your enum and your other functions will work.
Any operators over your type should produce results in your type. There are different ways to achieve this, depending on your requirements. Here we shall treat your type as a wrap-around ("modulo") one - add 1 to the last literal and you get the first. To do this we use raw values from 0 to n for your types literals and use modulo arithmetic.
First we need a modulo operator which always returns a +ve result, the Swift % can return a -ve one which is not what is required for modulo arithmetic.
infix operator %% : MultiplicationPrecedence

func %%(_ a: Int, _ n: Int) -> Int
{
    precondition(n > 0, "modulus must be positive")
    let r = a % n
    return r >= 0 ? r : r + n
}
Now your enum assigning suitable raw values:
enum FrontViewPosition: Int
{
    case None = 0
    case LeftSideMostRemoved = 1
    case LeftSideMost = 2
    case LeftSide = 3
    case Left = 4
    case Right = 5
    case RightMost = 6
    case RightMostRemoved = 7
Now we define the appropriate operators.
For addition we can add an integer to a FrontViewPosition and get a FrontViewPosition back. To do this we convert to raw values, add, and then reduce modulo 8 to wrap-around. Note the need for a ! to return a non-optional FrontViewPosition - this will always succeed due to the modulo math:
    static func +(_ x : FrontViewPosition, _ y : Int) -> FrontViewPosition
    {
        return FrontViewPosition(rawValue: (x.rawValue + y) %% 8)!
    }
For subtraction we return the integer difference between two FrontViewPosition values:
    static func -(_ x : FrontViewPosition, _ y : FrontViewPosition) -> Int
    {
        return x.rawValue - y.rawValue
    }
}
You can define further operators as needed, say a subtraction operator which takes a FrontViewPosition and an Int and returns a FrontViewPosition.
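For the >= comparison the question also asks about, one option (a sketch I'm adding here, not part of the answer above) is to conform the enum to Comparable by ordering on the raw values. Equatable comes for free on a raw-value enum, so supplying < is enough, and self.frontViewPosition >= .Left then compiles as written:
extension FrontViewPosition: Comparable {
    static func < (lhs: FrontViewPosition, rhs: FrontViewPosition) -> Bool {
        return lhs.rawValue < rhs.rawValue
    }
}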
HTH
An enum can also define functions:
enum Tst: Int {
    case A = 10
    case B = 20
    case C = 30

    static func + (t1: Tst, t2: Tst) -> Tst {
        return Tst.init(rawValue: t1.rawValue + t2.rawValue)! // careful: this crashes if the sum is not a valid raw value!
    }
}
var a = Tst.A
var b = Tst.B
var c = a+b

Trial division for primes with immutable collections in Scala

I am trying to learn Scala and the ideology of functional programming by rewriting basic exercises. Currently I am having trouble with the naive "trial division" approach to generating primes.
The trouble, described below, is that I could not rewrite the well-known algorithm in a functional style while preserving efficiency, because I have no suitable immutable data structure: something like a List, but with fast operations not only on the head but also at the very end.
I started by writing Java code which, for every odd number, tests its divisibility by the already-found primes (limited by the square root of the value being tested) and adds it to the end of the list if no divisor was found.
http://ideone.com/QE8U0I
List<Integer> primes = new ArrayList<>();
primes.add(2);
int cur = 3;
while (primes.size() < 100000) {
    for (Integer x : primes) {
        if (x * x > cur) {
            primes.add(cur);
            break;
        }
        if (cur % x == 0) {
            break;
        }
    }
    cur += 2;
}
Now I tried to rewrite it in a "functional way" - there was no problem with using recursion instead of loops, but I got stuck with immutable collections. The core idea is as follows:
http://ideone.com/4DQ6mi
def primes(n: Int) = {
  @tailrec
  def divisibleByAny(x: Int, list: List[Int]): Boolean = {
    if (list.isEmpty) false else {
      val h = list.head
      h * h <= x && (x % h == 0 || divisibleByAny(x, list.tail))
    }
  }

  @tailrec
  def morePrimes(from: Int, prev: List[Int]): List[Int] = {
    if (prev.size == n) prev else
      morePrimes(from + 2, if (divisibleByAny(from, prev)) prev else prev :+ from)
  }

  morePrimes(3, List(2))
}
But it is slow - if I understand correctly, because appending to the end of an immutable list requires creating a copy of the whole list.
I searched the documentation for a more suitable data structure and tried substituting the list with an immutable Queue, about which it is said:
Adding items to the queue always has cost O(1) ... Removing an item is on average O(1).
But it is still even slower:
http://ideone.com/v8BsuQ
def primes(n: Int) = {
  @tailrec
  def divisibleByAny(x: Int, list: Queue[Int]): Boolean = {
    if (list.isEmpty) false else {
      val (h, t) = list.dequeue
      h * h <= x && (x % h == 0 || divisibleByAny(x, t))
    }
  }

  @tailrec
  def morePrimes(from: Int, prev: Queue[Int]): Queue[Int] = {
    if (prev.size == n) prev else
      morePrimes(from + 2, if (divisibleByAny(from, prev)) prev else prev.enqueue(from))
  }

  morePrimes(3, Queue(2))
}
What is going wrong or am I missing something?
P.S. I believe there are other algorithms for generating primes which are better suited to a functional style; I think I've seen a paper on this. But right now I'm interested in this one, or more precisely in the existence of a suitable data structure.
According to http://docs.scala-lang.org/overviews/collections/performance-characteristics.html Vectors have an amortised constant cost for appending, prepending and seeking. Indeed, using vectors instead of lists in your solution is much faster
def primes(n: Int) = {
  @tailrec
  def divisibleByAny(x: Int, list: Vector[Int]): Boolean = {
    if (list.isEmpty) false else {
      val (h +: t) = list
      h * h <= x && (x % h == 0 || divisibleByAny(x, t))
    }
  }

  @tailrec
  def morePrimes(from: Int, prev: Vector[Int]): Vector[Int] = {
    if (prev.length == n) prev else
      morePrimes(from + 2, if (divisibleByAny(from, prev)) prev else prev :+ from)
  }

  morePrimes(3, Vector(2))
}
http://ideone.com/x3k4A3
I think you have 2 main options:
1. Use a Vector, which is better than a List for appending. It is a bit-mapped trie data structure (http://en.wikipedia.org/wiki/Trie). It's "effectively" O(1) for appending, i.e. O(1) on average.
2. Or... possibly the answer you're not looking for: use a mutable data structure like ListBuffer. Immutability is great to aim for, and immutable collections should be your go-to choice, but sometimes, for efficiency reasons, you may use mutable structures. What is key is to make sure they do not "leak out" of your classes. If you look at the List.scala implementation, you'll see ListBuffer used a lot internally; however, it is converted back to a List just before it leaves the class. If it's good enough for the core Scala libraries, it's probably OK for you to use in the exceptional cases that warrant it (see the sketch just below).
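For illustration, a minimal sketch of that ListBuffer approach (my own code, not from the linked sources), keeping the mutable buffer strictly local to the function:
import scala.collection.mutable.ListBuffer

def primes(n: Int): List[Int] = {
  val buf = ListBuffer(2)
  var cur = 3
  while (buf.size < n) {
    // trial division by the primes found so far, up to sqrt(cur)
    if (!buf.iterator.takeWhile(p => p * p <= cur).exists(cur % _ == 0))
      buf += cur
    cur += 2
  }
  buf.toList // the buffer never escapes this function
}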
Besides using Vector, also consider using higher-order functions instead of recursion. That's also a completely valid functional style. On my machine the following implementation of divisibleByAny is about 8x faster than @Pyetras's tailrec implementation when running primes(1000000):
def divisibleByAny(x: Int, list: Vector[Int]): Boolean =
  list.view.takeWhile(el => el * el <= x).exists(x % _ == 0)

Poor performance using rbfdot from the kernlab package in a loop

Here is a simplified example of my code (the function rbf is from the kernlab package), which works but is slow and needs speeding up:
install.packages('kernlab')
library('kernlab')
rbf <- rbfdot(sigma=1)
test <- matrix(NaN,nrow=5,ncol=10)
for (i in 1:5) {
  for (j in 1:10) { test[i, j] <- rbf(i, j) }
}
I've tried outer() but it doesn't work because the rbf function doesn't return the required length (50). I need to speed this code up because I have a huge amount of data. I've read that vectorization would be the holy grail to speeding this up but I don't know how.
Could you please point me in the right direction?
If rbf is indeed the return value from a call to rbfdot, then body(rbf) looks something like
{
    if (!is(x, "vector"))
        stop("x must be a vector")
    if (!is(y, "vector") && !is.null(y))
        stop("y must a vector")
    if (is(x, "vector") && is.null(y)) {
        return(1)
    }
    if (is(x, "vector") && is(y, "vector")) {
        if (!length(x) == length(y))
            stop("number of dimension must be the same on both data points")
        return(exp(sigma * (2 * crossprod(x, y) - crossprod(x) -
            crossprod(y))))
    }
}
Since most of this consists of input checks, and crossprod simplifies when you are only passing in scalars, I think your function simplifies to
rbf <- function(x, y, sigma = 1)
{
    exp(-sigma * (x - y) ^ 2)
}
For a possible further speedup, use the compiler package (requires R-2.14.0 or later).
rbf_loop <- function(m, n)
{
    out <- matrix(NaN, nrow = m, ncol = n)
    for (i in seq_len(m))
    {
        for (j in seq_len(n))
        {
            out[i, j] <- rbf(i, j)
        }
    }
    out
}
library(compiler)
rbf_loop_cmp <- cmpfun(rbf_loop)
Then compare the timing of rbf_loop_cmp(m, n) to what you had before.
The simplification step is easier to see in reverse. If you expand (x - y) ^ 2 you get x ^ 2 - 2 * x * y + y ^ 2, which is the negative of the expression multiplied by sigma inside body(rbf).
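With the function in this simplified, vectorized form, the outer() approach from the question should also work (a sketch; rbf_vec is a name I made up to avoid clobbering the kernlab object):
rbf_vec <- function(x, y, sigma = 1) exp(-sigma * (x - y) ^ 2)

# the same 5 x 10 matrix as the double loop, with no explicit looping
test <- outer(1:5, 1:10, rbf_vec)

# quick sanity check against the kernlab kernel for one pair of scalars
k <- rbfdot(sigma = 1)
all.equal(as.numeric(k(2, 5)), rbf_vec(2, 5))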
Use the function kernelMatrix() in kernlab; it should be a couple of orders of magnitude faster than looping over the kernel function:
library(kernlab)
rbf <- rbfdot(sigma=1)
kernelMatrix(rbf, 1:5, 1:10)

Why is filter in front of foldLeft slow in Scala?

I wrote an answer to the first Project Euler question:
Add all the natural numbers below one thousand that are multiples of 3 or 5.
The first thing that came to me was:
(1 until 1000).filter(i => (i % 3 == 0 || i % 5 == 0)).foldLeft(0)(_ + _)
but it's slow (it takes 125 ms), so I rewrote it, simply thinking of 'another way' rather than 'the faster way':
(1 until 1000).foldLeft(0) {
  (total, x) =>
    x match {
      case i if (i % 3 == 0 || i % 5 == 0) => i + total // Add
      case _ => total // skip
    }
}
This is much faster (only 2 ms). Why? I'm guessing the second version uses only the Range generator and doesn't manifest a fully realized collection in any way, doing it all in one pass, both faster and with less memory. Am I right?
Here the code on IdeOne: http://ideone.com/GbKlP
The problem, as others have said, is that filter creates a new collection. The alternative withFilter doesn't, but that doesn't have a foldLeft. Also, using .view, .iterator or .toStream would all avoid creating the new collection in various ways, but they are all slower here than the first method you used, which I thought somewhat strange at first.
But, then... See, 1 until 1000 is a Range, whose size is actually very small, because it doesn't store each element. Also, Range's foreach is extremely optimized, and is even specialized, which is not the case of any of the other collections. Since foldLeft is implemented as a foreach, as long as you stay with a Range you get to enjoy its optimized methods.
(_: Range).foreach:
@inline final override def foreach[@specialized(Unit) U](f: Int => U) {
  if (length > 0) {
    val last = this.last
    var i = start
    while (i != last) {
      f(i)
      i += step
    }
    f(i)
  }
}
(_: Range).view.foreach
def foreach[U](f: A => U): Unit =
  iterator.foreach(f)
(_: Range).view.iterator
override def iterator: Iterator[A] = new Elements(0, length)

protected class Elements(start: Int, end: Int) extends BufferedIterator[A] with Serializable {
  private var i = start

  def hasNext: Boolean = i < end

  def next: A =
    if (i < end) {
      val x = self(i)
      i += 1
      x
    } else Iterator.empty.next

  def head =
    if (i < end) self(i) else Iterator.empty.next

  /** $super
   *  '''Note:''' `drop` is overridden to enable fast searching in the middle of indexed sequences.
   */
  override def drop(n: Int): Iterator[A] =
    if (n > 0) new Elements(i + n, end) else this

  /** $super
   *  '''Note:''' `take` is overridden to be symmetric to `drop`.
   */
  override def take(n: Int): Iterator[A] =
    if (n <= 0) Iterator.empty.buffered
    else if (i + n < end) new Elements(i, i + n)
    else this
}
(_: Range).view.iterator.foreach
def foreach[U](f: A => U) { while (hasNext) f(next()) }
And that, of course, doesn't even count the filter between view and foldLeft:
override def filter(p: A => Boolean): This = newFiltered(p).asInstanceOf[This]

protected def newFiltered(p: A => Boolean): Transformed[A] = new Filtered { val pred = p }

trait Filtered extends Transformed[A] {
  protected[this] val pred: A => Boolean

  override def foreach[U](f: A => U) {
    for (x <- self)
      if (pred(x)) f(x)
  }

  override def stringPrefix = self.stringPrefix + "F"
}
Try making the collection lazy first, so
(1 until 1000).view.filter...
instead of
(1 until 1000).filter...
That should avoid the cost of building an intermediate collection. You might also get better performance from using sum instead of foldLeft(0)(_ + _); it's always possible that some collection type has a more efficient way to sum numbers. If not, it's still cleaner and more declarative...
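Putting the two suggestions together, the expression would look something like this (a small sketch of the combined call):
(1 until 1000).view.filter(i => i % 3 == 0 || i % 5 == 0).sum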
Looking through the code, it looks like filter does build a new Seq on which the foldLeft is called. The second version skips that step. It's not so much the memory, although that surely helps, but that the filtered collection is never built at all. All that work is never done.
Range uses TraversableLike.filter, which looks like this:
def filter(p: A => Boolean): Repr = {
  val b = newBuilder
  for (x <- this)
    if (p(x)) b += x
  b.result
}
I think it's the += on line 4 that's the difference. Filtering in foldLeft eliminates it.
filter creates a whole new sequence, on which foldLeft is then called. Try:
(1 until 1000).view.filter(i => (i % 3 == 0 || i % 5 == 0)).reduceLeft(_+_)
This will prevent said effect, merely wrapping the original thing. Exchanging foldLeft with reduceLeft is only cosmetic (in this case).
Now the challenge is, can you think of a yet more efficient way? Not that your solution is too slow in this case, but how well does it scale? What if instead of 1000, it was 1000000000? There is a solution that could compute the latter case just as quickly as the former.
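For completeness, one approach along those lines (a sketch of my own, not necessarily what the answerer had in mind) is to drop the iteration entirely and use the arithmetic-series formula together with inclusion-exclusion, which costs the same no matter how large the bound is:
// sum of the multiples of k below n: k * (1 + 2 + ... + m) with m = (n - 1) / k
def sumOfMultiplesBelow(k: Long, n: Long): Long = {
  val m = (n - 1) / k
  k * m * (m + 1) / 2
}

// multiples of 3 plus multiples of 5, minus multiples of 15 (counted twice)
def euler1(n: Long): Long =
  sumOfMultiplesBelow(3, n) + sumOfMultiplesBelow(5, n) - sumOfMultiplesBelow(15, n)

euler1(1000)        // 233168
euler1(1000000000L) // same cost as the call above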
