How to add or subtract two enum values in Swift

So I have this enum that defines different view positions on a view controller when a side bar menu is presented. I need to add, subtract, multiply, or divide the different values based on different situations. How exactly do I write the functions that let me use the -, +, *, or / operators on the values of the enum? I can find plenty of examples that use the comparison operator ==, but I haven't been able to find any that use >=, which I also need.
Here is the enum
enum FrontViewPosition: Int {
    case None
    case LeftSideMostRemoved
    case LeftSideMost
    case LeftSide
    case Left
    case Right
    case RightMost
    case RightMostRemoved
}
Now I'm trying to use these operators in functions like so.
func getAdjustedFrontViewPosition(_ frontViewPosition: FrontViewPosition, forSymetry symetry: Int) {
    var frontViewPosition = frontViewPosition
    if symetry < 0 {
        frontViewPosition = .Left + symetry * (frontViewPosition - .Left)
    }
}
Also in another function like so.
func rightRevealToggle(animated: Bool) {
    var toggledFrontViewPosition: FrontViewPosition = .Left
    if self.frontViewPosition >= .Left {
        toggledFrontViewPosition = .LeftSide
    }
    self.setFrontViewPosition(toggledFrontViewPosition, animated: animated)
}
I know that I need to define these operator functions myself; I just don't understand how to go about doing it. A little help would be greatly appreciated.

The type you are trying to define has a similar algebra to pointers in that you can add an offset to a pointer to get a pointer and subtract two pointers to get a difference. Define these two operators on your enum and your other functions will work.
Any operators over your type should produce results in your type. There are different ways to achieve this, depending on your requirements. Here we shall treat your type as a wrap-around ("modulo") one: add 1 to the last case and you get the first. To do this we use raw values from 0 to n for your type's cases and use modulo arithmetic.
First we need a modulo operator that always returns a non-negative result; Swift's % can return a negative one, which is not what modulo arithmetic requires here.
infix operator %% : MultiplicationPrecedence
func %%(_ a: Int, _ n: Int) -> Int
{
    precondition(n > 0, "modulus must be positive")
    let r = a % n
    return r >= 0 ? r : r + n
}
Now your enum assigning suitable raw values:
enum FrontViewPosition: Int
{
    case None = 0
    case LeftSideMostRemoved = 1
    case LeftSideMost = 2
    case LeftSide = 3
    case Left = 4
    case Right = 5
    case RightMost = 6
    case RightMostRemoved = 7
Now we define the appropriate operators, still inside the enum body (its closing brace comes after them).
For addition we can add an integer to a FrontViewPosition and get a FrontViewPosition back. To do this we convert to raw values, add, and then reduce modulo 8 to wrap around. Note the need for a ! to return a non-optional FrontViewPosition; this will always succeed due to the modulo math:
    static func +(_ x : FrontViewPosition, _ y : Int) -> FrontViewPosition
    {
        return FrontViewPosition(rawValue: (x.rawValue + y) %% 8)!
    }
For subtraction we return the integer difference between two FrontViewPosition values:
    static func -(_ x : FrontViewPosition, _ y : FrontViewPosition) -> Int
    {
        return x.rawValue - y.rawValue
    }
}
You can define further operators as needed, say a subtraction operator which takes a FrontViewPosition and an Int and returns a FrontViewPosition.
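For example, here is a rough sketch of that extra operator, plus a Comparable conformance so the >= check from the question compiles; it assumes the FrontViewPosition enum and the %% operator defined above.
extension FrontViewPosition: Comparable {
    // Subtract an Int offset, wrapping around just like + above.
    static func -(_ x: FrontViewPosition, _ y: Int) -> FrontViewPosition {
        return FrontViewPosition(rawValue: (x.rawValue - y) %% 8)!
    }

    // Comparable only needs <; together with the enum's synthesized == you get >, <= and >= for free.
    static func <(_ x: FrontViewPosition, _ y: FrontViewPosition) -> Bool {
        return x.rawValue < y.rawValue
    }
}
With the Comparable conformance in place, the self.frontViewPosition >= .Left test from the question works unchanged.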
HTH

An enum can also define its own operator functions:
enum Tst: Int {
    case A = 10
    case B = 20
    case C = 30
    static func + (t1: Tst, t2: Tst) -> Tst {
        return Tst(rawValue: t1.rawValue + t2.rawValue)! // careful: this force-unwrap crashes if the sum is not a valid raw value (e.g. B + C = 50)
    }
}
var a = Tst.A
var b = Tst.B
var c = a+b

Related

How to make a traditional for-loop in Kotlin

JavaScript
for (var x = 0; x < 360; x += 0.5)
{
// ...
}
How do I do this in Kotlin?
Note that my step size is a floating point and so a regular range won't work:
for (x in 0.0 until 360.0 step 0.5) {
// ...
}
I also need to exclude the ending value, which is why I'm using until.
I will resort to a while loop for now:
var x = 0.0;
while (x < 360.0) {
// ...
x += 0.5
}
There isn't a way to do this right now in Kotlin because Kotlin does not have "traditional" for loops. I believe you're right in choosing a while loop. In fact, traditional for loops are just while loops in disguise:
for (init; condition; post) {
    // do something
}
can always be rewritten,
init
while (condition) {
    // do something
    post
}
with no change in behavior, because the init statement will always execute and the condition will always be checked before the loop runs even once. One thing this construct can't give you is a variable that's only scoped to this block. If you're really after that behavior, the following would work, though it's not very idiomatic.
for (x in generateSequence(0.0) { it + 0.5 }.takeWhile { it < 360.0 }) {
    println(x)
}
If you're using a Sequence, you might also be interested in the more idiomatic forEach:
generateSequence(0.0) { it + 0.5 }.takeWhile { it < 360.0 }.forEach { x ->
    // do something
}
As repeated addition is just a shorthand for multiplication you can look at for (x in 0.0 until 360.0 step 0.5) as "do something for as many times as 0.5 fits in 360". Therefore it would make sense to express it like this:
val times = (360 / 0.5).toInt()
repeat(times){
}
Of course you can also inline times:
repeat(times = (360 / 0.5).toInt()){
}
The advantage is that you skip the sequence generation. The con: you lose access to your counting variable x (though see the sketch below).
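If you do want the counting value back, one small workaround (a sketch, reusing the 0.5 step from above) is to rebuild x from the repeat index:
fun main() {
    val step = 0.5
    val times = (360 / step).toInt()
    repeat(times) { i ->
        val x = i * step // recover the counting variable from the index
        println(x)
    }
}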
see also https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/repeat.html
I agree with the answer by @AdamYakes.
Since the until operator is not available for floating-point types, you cannot use it with a floating-point step value.
However, if you still want to use
for (x in 0.0 until 360.0 step 0.5) {
// use value
}
you can do it as follows:
for (x in 0 until 3600 step 5) {
    // use (x / 10.0) as the value
}
Kotlin is so flexible that one could hack a custom solution:
private infix fun (((Double) -> Unit, Double) -> Unit).step(step: Double) =
    fun(action: (Double) -> Unit) = this(action, step)

private infix fun Double.upto(to: Double) =
    fun(action: (Double) -> Unit, step: Double) {
        var v = this
        while (v < to) {
            action(v)
            v += step
        }
    }

fun main() {
    // Usage:
    (0.0 upto 360.0 step 0.5) {
        println(it)
    }
}
What I did above is to create two infix functions.
The upto() infix function can be used with two Doubles: it takes 0.0 upto 360.0 and returns a function that expects two arguments: an action function and the step double value.
The step infix function can be used with functions returned by upto() combined with a Double (so f step 0.5). It returns a function that calls the upto() function passing the step value and the user's { ... } block that gets called for each iteration.
Short and convoluted.
A simpler alternative:
inline fun doubleFor(
    from: Double, to: Double, step: Double, action: (Double) -> Unit
) {
    var v = from
    while (v < to) {
        action(v)
        v += step
    }
}

fun main() {
    // Usage:
    doubleFor(0.0, 360.0, 0.5) {
        println(it)
    }
}
A good IDE would show the argument names next to the values making the meaning of the arguments in the usage code obvious.
A third approach can be found at https://stackoverflow.com/a/44332139/2933899 which allows one to write for(x in 0.0 .. 360.0 step 0.5) { println(x) } with the inconvenience that the top boundary value is included.
Run these at https://pl.kotl.in/ZuR354Fas
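For reference, here is a rough sketch of what that third approach can look like (an infix step defined on a closed Double range; names here are illustrative and the linked answer may differ):
// Sketch: an infix step for closed Double ranges, so 0.0..360.0 step 0.5 works.
// As noted above, the end value 360.0 is included.
infix fun ClosedRange<Double>.step(step: Double): Sequence<Double> {
    require(step > 0.0) { "step must be positive" }
    return generateSequence(start) { it + step }.takeWhile { it <= endInclusive }
}

fun main() {
    for (x in 0.0..360.0 step 0.5) {
        println(x)
    }
}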

How to count how many one bits are in a byte in Go?

Suppose I have two variables that only use 6 bits:
var a byte = 31 // 00011111
var b byte = 50 // 00110010
The first (a) has more one bits than b, but b is of course greater than a, so it's not possible to simply use a > b.
To achieve what I need, I use a loop:
func countOneBits(byt byte) int {
    var counter int
    var divider byte
    for divider = 32; divider >= 1; divider >>= 1 {
        if byt & divider == divider {
            counter++
        }
    }
    return counter
}
This works, I can use countOneBits(a) > countOneBits(b)...
But I don't think this is the best solution for this case; I don't think it needs a loop, which is why I'm asking here.
Is there a better alternative (performance-wise) to count how many 1 bits there are in six bits?
Given that the input is a single byte, a lookup table is probably the best option... it only takes 256 bytes and you get code like
var count = bitcount[input]
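For illustration, a minimal sketch of that lookup-table idea (the table name and init code here are mine, not from any particular package):
package main

import "fmt"

// bitcount[i] holds the number of one bits in the byte value i.
var bitcount [256]byte

func init() {
    for i := 1; i < 256; i++ {
        bitcount[i] = bitcount[i/2] + byte(i&1)
    }
}

func main() {
    var a byte = 31 // 00011111 -> five one bits
    var b byte = 50 // 00110010 -> three one bits
    fmt.Println(bitcount[a], bitcount[b], bitcount[a] > bitcount[b])
}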
Given that this function will be available in the package math/bits in the next Go release (1.9, this August), here is the code for a 32-bit integer.
// OnesCount32 returns the number of one bits ("population count") in x.
func OnesCount32(x uint32) int {
    return int(pop8tab[x>>24] + pop8tab[x>>16&0xff] + pop8tab[x>>8&0xff] + pop8tab[x&0xff])
}
The pop8tab table is defined here. And for your question in particular, the 8-bit version:
func OnesCount8(x uint8) int {
    return int(pop8tab[x])
}
It is also possible to count bits with binary operations. See these bit twiddling hacks.
func bitSetCount(v byte) byte {
    v = (v & 0x55) + ((v >> 1) & 0x55)
    v = (v & 0x33) + ((v >> 2) & 0x33)
    return (v + (v >> 4)) & 0xF
}
You'll have to benchmark to see if this is faster than the lookup table which is the simplest to implement.
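A rough benchmark sketch for that comparison (this goes in a _test.go file and assumes bitSetCount above and a bitcount table like the one sketched earlier live in the same package):
package main

import "testing"

func BenchmarkLookupTable(b *testing.B) {
    var sum byte
    for i := 0; i < b.N; i++ {
        sum += bitcount[byte(i)]
    }
    _ = sum
}

func BenchmarkBitTricks(b *testing.B) {
    var sum byte
    for i := 0; i < b.N; i++ {
        sum += bitSetCount(byte(i))
    }
    _ = sum
}
Run it with go test -bench=. to see which wins on your machine.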
There is also a POPCNT version for Go:
https://github.com/tmthrgd/go-popcount

One-liner to generate Powerball picks in Swift?

With the U.S.'s large $1.5 Billion lottery this week, I wrote a function in Ruby to make Powerball picks. In Powerball, you choose 5 numbers from the range 1..69 (with no duplicates) and 1 number from the range 1..26.
This is what I came up with:
def pball
  Array(1..69).shuffle[0..4].sort + [rand(1..26)]
end
It works by creating an array of integers from 1 to 69, shuffling that array, choosing the first 5 numbers, sorting those, and finally adding on a number from 1 to 26.
To do this in Swift takes a bit more work since Swift doesn't have the built-in shuffle method on Array.
This was my attempt:
func pball() -> [Int] {
    let arr = Array(1...69).map{($0, drand48())}.sort{$0.1 < $1.1}.map{$0.0}[0...4].sort()
    return arr + [Int(arc4random_uniform(26) + 1)]
}
Since there is no shuffle method, it works by creating an [Int] with values in the range 1...69. It then uses map to create [(Int, Double)], an array of tuple pairs that contain the numbers and a random Double in the range 0.0 ..< 1.0. It then sorts this array using the Double values and uses a second map to return to [Int] and then uses the slice [0...4] to extract the first 5 numbers and sort() to sort them.
In the second line, it appends a number in the range 1...26. I tried adding this to the first line, but Swift gave the error:
Expression was too complex to be solved in reasonable time; consider
breaking up the expression into distinct sub-expressions.
Can anyone suggest how to turn this into a 1-line function? Perhaps there is a better way to choose the 5 numbers from 1...69.
Xcode 8.3 • Swift 3.1
import GameKit
var powerballNumbers: [Int] {
    return (GKRandomSource.sharedRandom().arrayByShufflingObjects(in: Array(1...69)) as! [Int])[0..<5].sorted() + [Int(arc4random_uniform(26) + 1)]
}
powerballNumbers // [5, 9, 62, 65, 69, 2]
Swift 2.x
import GameKit
var powerballNumbers: [Int] {
    return (GKRandomSource.sharedRandom().arrayByShufflingObjectsInArray(Array(1...69)) as! [Int])[0...4].sort() + [Int(arc4random_uniform(26).successor())]
}
powerballNumbers // [21, 37, 39, 42, 65, 23]
I don't find the "one-liner" concept very compelling. Some languages lend themselves to it; others don't. I would suggest giving Swift a shuffle method to start with:
extension Array {
    mutating func shuffle () {
        for var i = self.count - 1; i != 0; i-- {
            let ix1 = i
            let ix2 = Int(arc4random_uniform(UInt32(i+1)))
            (self[ix1], self[ix2]) = (self[ix2], self[ix1])
        }
    }
}
But since I made this mutating, we still need more than one line to express the entire operation because we have to have a var reference to our starting array:
var arr = Array(1...69)
(1...4).forEach {_ in arr.shuffle()}
let result = Array(arr[0..<5]) + [Int(arc4random_uniform(26)) + 1]
If you really insist on the one-liner, and you don't count the code needed to implement shuffle, then you can do it, though less efficiently, by defining shuffle more like this:
extension Array {
    func shuffle () -> [Element] {
        var arr = self
        for var i = arr.count - 1; i != 0; i-- {
            let ix1 = i
            let ix2 = Int(arc4random_uniform(UInt32(i+1)))
            (arr[ix1], arr[ix2]) = (arr[ix2], arr[ix1])
        }
        return arr
    }
}
And here's your one-liner:
let result = Array(1...69).shuffle().shuffle().shuffle().shuffle()[0..<5] + [Int(arc4random_uniform(26)) + 1]
But oops, I omitted your sort. I don't see how to do that without getting the "too complex" error; to work around that, I had to split it into two lines:
var result = Array(1...69).shuffle().shuffle().shuffle().shuffle()[0..<5].sort(<)
result.append(Int(arc4random_uniform(26)) + 1)
How about this:
let winningDraw = (1...69).sort{ _ in arc4random_uniform(2) > 0}[0...4].sort() + [Int(arc4random_uniform(26)+1)]
[edit] The formula above wasn't uniformly random, but this one will be:
(1...69).map({Int(rand()%1000*70+$0)}).sort().map({$0%70})[0...4].sort() + [Int(rand()%26+1)]
For the fun of it, a non-GameplayKit (long) one-liner for Swift 3, using the global sequence(state:next:) function to generate random elements from the mutable state array rather than shuffling the array (although mutating the value array 5 times, so some extra copy operations here...)
let powerballNumbers = Array(sequence(state: Array(1...69), next: {
(s: inout [Int]) -> Int? in s.remove(at: Int(arc4random_uniform(UInt32(s.count))))})
.prefix(5).sorted()) + [Int(arc4random_uniform(26) + 1)]
... broken down for readability.
(Possible in future Swift version)
If type inference weren't broken for inout closure parameters (as arguments to closures), we could reduce the above to:
let powerballNumbers = Array(sequence(state: Array(1...69), next: {
$0.remove(at: Int(arc4random_uniform(UInt32($0.count)))) })
.prefix(5).sorted()) + [Int(arc4random_uniform(26) + 1)]
If we'd also allow the following extension
extension Int {
    var rand: Int { return Int(arc4random_uniform(UInt32(exactly: self) ?? 0)) }
}
Then, we could go on to reduce the one-line to:
let powerballNumbers = Array(sequence(state: Array(1...69), next: { $0.remove(at: $0.count.rand) }).prefix(5).sorted()) + [26.rand + 1]
Xcode 10 • Swift 4.2
Swift has now added shuffled() to ClosedRange and random(in:) to Int, which makes this easily accomplished in one line:
func pball() -> [Int] {
    return (1...69).shuffled().prefix(5).sorted() + [Int.random(in: 1...26)]
}
Further trimmings:
Because of the return type of pball(), the Int can be inferred in the random method call. Also, .prefix(5) can be replaced with [...4]. Finally, return can be omitted from the single-expression function (Swift 5.1 and later):
func pball() -> [Int] {
    (1...69).shuffled()[...4].sorted() + [.random(in: 1...26)]
}
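A quick usage check (illustrative only; the output is random and will differ per run):
print(pball()) // e.g. [7, 23, 36, 41, 68, 14]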

Does adding a semicolon at the end of `return` make a difference?

The Rust Guide states that:
The semicolon turns any expression into a statement by throwing away its value and returning unit instead.
I thought I got this concept down until I ran an experiment:
fn print_number(x: i32, y: i32) -> i32 {
    if x + y > 20 { return x }
    x + y
}
Which compiles fine. Then, I added a semicolon at the end of the return line (return x;). From what I understand, this turns the line into a statement, returning the unit data type ().
Nonetheless, the end result is the same.
Normally, every branch in the if expression should have the same type. If the type for some branch is underspecified, the compiler tries to find the single common type:
fn print_number(x: int, y: int) {
    let v = if x + y > 20 {
        3 // this can be either 3u, 3i, 3u8 etc.
    } else {
        x + y // this is always int
    };
    println!("{}", v);
}
In this code, 3 is underspecified but the else branch forces it to have the type of int.
This sounds simple: There is a function that "unifies" two or more types into the common type, or it will give you an error when that's not possible. But what if there were a fail! in the branch?
fn print_number(x: int, y: int) {
    let v = if x + y > 20 {
        fail!("x + y too large") // ???
    } else {
        x + y // this is always int
    };
    println!("{}", v); // uh wait, what's the type of `v`?
}
I'd want fail! not to affect the other branches; it is an exceptional case after all. Since this pattern is quite common in Rust, the concept of a diverging type has been introduced. There is no value whose type is diverging. (It is also called an "uninhabited type" or "void type" depending on the context. Not to be confused with the "unit type", which has a single value of ().) Since the diverging type is naturally a subtype of any other type, the compiler concludes that v's type is just that of the else branch, int.
A return expression is no different from fail! for the purpose of type checking. It abruptly escapes from the current flow of execution just like fail! (but does not terminate the task, thankfully). Still, the diverging type does not propagate to the next statement:
fn print_number(x: int, y: int) {
    let v = if x + y > 20 {
        return; // this is diverging
        () // this is implied, even when you omit it
    } else {
        x + y // this is always int
    };
    println!("{}", v); // again, what's the type of `v`?
}
Note that the sole semicoloned statement x; is equivalent to the expression x; (). Normally a; b has the same type as b, so it would be quite strange that x; () has a type of () only when x is not diverging, and it diverges when x does diverge. That's why your original code didn't work.
It is tempting to add a special case like that:
Why don't you make x; () diverging when x diverges?
Why don't you assume uint for every underspecified integer literal when its type cannot be inferred? (Note: this was the case in the past.)
Why don't you automatically find the common supertrait when unifying multiple trait objects?
The truth is that designing the type system is not very hard, but verifying it is much harder, and we want to ensure that Rust's type system is future-proof and long-standing. Some of these may happen if they really are useful and are proved "correct" for our purposes, but not immediately.
I'm not 100% sure of what I'm saying but it kinda makes sense.
There's another concept coming into play: reachability analysis. The compiler knows that what follows a return expression statement is unreachable. For example, if we compile this function:
fn test() -> i32 {
    return 1;
    2
}
We get the following warning:
warning: unreachable expression
 --> src/main.rs:3:5
  |
3 |     2
  |     ^
  |
The compiler can ignore the "true" branch of the if expression if it ends with a return expression and only consider the "false" branch when determining the type of the if expression.
You can also see this behavior with diverging functions. Diverging functions are functions that don't return normally (e.g. they always fail). Try replacing the return expression with the fail! macro (which expands to a call to a diverging function). In fact, return expressions are also considered to be diverging; this is the basis of the aforementioned reachability analysis.
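For instance, a small sketch of that substitution in today's Rust, where panic! is the current name for the old fail! macro:
fn print_number(x: i32, y: i32) -> i32 {
    if x + y > 20 {
        panic!("x + y too large") // diverges, so this branch still fits the i32 result type
    } else {
        x + y
    }
}

fn main() {
    println!("{}", print_number(3, 4)); // prints 7; larger inputs would panic instead
}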
However, if there's an actual () expression after the return statement, you'll get an error. This function:
fn print_number(x: i32, y: i32) -> i32 {
    if x + y > 20 {
        return x;
        ()
    } else {
        x + y
    }
}
gives the following error:
error[E0308]: mismatched types
 --> src/main.rs:4:9
  |
4 |         ()
  |         ^^ expected i32, found ()
  |
  = note: expected type `i32`
             found type `()`
In the end, it seems diverging expressions (which includes return expressions) are handled differently by the compiler when they are followed by a semicolon: the statement is still diverging.

Project Euler 16 - Help in solving it

I'm solving Project Euler problem 16. I've ended up with code that can logically solve it, but it can't handle the computation, I believe because it's overflowing or something. I tried int64 in place of int but it just prints 0, 0. If I change the power to anything below 30 it works, but above 30 it does not. Can anyone point out my mistake? I believe it's not able to calculate 2^1000.
// PE_16 project main.go
package main

import (
    "fmt"
)

func power(x, y int) int {
    var pow int
    var final int
    final = 1
    for pow = 1; pow <= y; pow++ {
        final = final * x
    }
    return final
}

func main() {
    var stp int
    var sumfdigits int
    var u, t, h, th, tth, l int
    stp = power(2, 1000)
    fmt.Println(stp)
    u = stp / 1 % 10
    t = stp / 10 % 10
    h = stp / 100 % 10
    th = stp / 1000 % 10
    tth = stp / 10000 % 10
    l = stp / 100000 % 10
    sumfdigits = u + t + h + th + tth + l
    fmt.Println(sumfdigits)
}
Your approach to this problem requires exact integer math with numbers up to 1000 bits in size, but you're using int, which is 32 or 64 bits. math/big.Int can handle such a task. I intentionally do not provide a ready-made solution using big.Int, as I assume your goal is to learn by doing it yourself, which I believe is the intent of Project Euler.
As noted by @jnml, ints aren't large enough; if you wish to calculate 2^1000 in Go, big.Ints are a good choice here. Note that math/big provides the Exp() method, which will be easier to use than converting your power function to big.Ints.
I worked through some Project Euler problems about a year ago, doing them in Go to get to know the language. I didn't like the ones that required big.Ints, which aren't so easy to work with in Go. For this one, I "cheated" and did it in one line of Ruby:
Removed because I remembered it was considered bad form to show a working solution, even in a different language.
Anyway, my Ruby example shows another thing I learned with Go's big.Ints: sometimes it's easier to convert them to a string and work with that string than to work with the big.Int itself. This problem strikes me as one of those cases.
Converting my Ruby algorithm to Go, I only work with big.Ints on one line, then it's easy to work with the string and get the answer in just a few lines of code.
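For what it's worth, here is a minimal sketch of just the big.Int part described above (Exp, then the decimal string); the digit-summing step is deliberately left out, in the same spirit of not posting a complete solution:
package main

import (
    "fmt"
    "math/big"
)

func main() {
    n := new(big.Int).Exp(big.NewInt(2), big.NewInt(1000), nil) // exact 2^1000
    s := n.String()                                             // its decimal digits as a string
    fmt.Println("2^1000 has", len(s), "decimal digits")
    // Summing the digits of s is the remaining step of the problem.
}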
You don't need to use math/big. Below is a schoolboy-maths way of doubling a decimal number, as a hint!
xs holds the decimal digits in least-significant-first order. Pass in a pointer to the digits (pxs) as the slice might need to grow.
func double(pxs *[]int) {
    xs := *pxs
    carry := 0
    for i, x := range xs {
        n := x*2 + carry
        if n >= 10 {
            carry = 1
            n -= 10
        } else {
            carry = 0
        }
        xs[i] = n
    }
    if carry != 0 {
        *pxs = append(xs, carry)
    }
}
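And a short usage sketch for that hint (it assumes the double function above lives in the same main package; again, the digit sum is left to you):
package main

import "fmt"

func main() {
    xs := []int{1} // decimal digits of 1, least significant first
    for i := 0; i < 1000; i++ {
        double(&xs)
    }
    fmt.Println(len(xs), "decimal digits in 2^1000")
    // Summing the digits in xs is the remaining step of the problem.
}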
