Swift: convert a 2's complement number in String form into a Double - swift2

I have a sensor which is generating 16 bit values in 2's complement in string form. I need to do some maths with these values, so ultimately I need to convert them to Doubles. For example, the 2's complement value "f112" should produce -3822.
I am very much a Swift beginner and feel there must be an easier way than this:
let value2 = UInt32("f112", radix: 16)!
var value5: Int32
if value2 > 32767 {
    // handle -ve values
    value5 = Int32(bitPattern: value2 | 0xFFFF0000)
} else {
    // +ve
    value5 = Int32(bitPattern: value2)
}
let doubleValue = Double(value5)

There are probably several solutions; this is one.
First it creates the UInt16 value and converts it to Int to be able to do 32 bit math.
Then it subtracts 0x10000 (65536) if the most significant bit is set.
let value2 = Int(UInt16("f112",radix:16)!)
let doubleValue = value2 > 0x7fff ? Double(value2 - 0x10000) : Double(value2)
or using the bitwise NOT operator (~)
let value2 = UInt16("f112",radix:16)!
let doubleValue = value2 > 0x7fff ? -Double(~value2 + 1) : Double(value2)

My solution is a little bit different from vadian's:
let str = "f112"
// if you are sure about str (so force unwrapping is fine)
let d = Double(Int16(bitPattern: UInt16(str, radix: 16)!)) // -3822
// or a safer version, which returns 0 in case of an invalid parameter
let d0 = Double(Int16(bitPattern: UInt16(str, radix: 16) ?? 0)) // -3822
By the way,
Int16("f112", radix: 16) == nil // true!
looks like a bug to me ... (though it is arguably expected, since 0xf112 is larger than Int16.max)
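This conversion can also be wrapped in a small helper that returns nil for invalid input. A minimal sketch (the function name is mine, not from the answers above):
func doubleFromHex16(_ s: String) -> Double? {
    // Parse the 16-bit pattern, then reinterpret it as a signed Int16.
    guard let raw = UInt16(s, radix: 16) else { return nil }
    return Double(Int16(bitPattern: raw))
}

doubleFromHex16("f112") // Optional(-3822.0)
doubleFromHex16("0010") // Optional(16.0)
doubleFromHex16("zzzz") // nil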

Related

Using static_cast to convert int to char

I have written this code to convert Decimal to binary:
string Solution::findDigitsInBinary(int A) {
    if (A == 0)
        return "0";
    else
    {
        string bin = "";
        while (A > 0)
        {
            int rem = (A % 2);
            bin.push_back(static_cast<char>(A % 2));
            A = A / 2;
        }
        reverse(bin.begin(), bin.end());
        return bin;
    }
}
But I am not getting the desired result using static_cast.
I have seen something related to this that gives the desired result:
(char)('0' + rem)
What's the difference from static_cast? Why am I not getting the correct binary output?
With:
(char)('0' + rem);
The important difference is not the cast, but that the remainder, which is always 0 or 1, is added to the character '0', which means that you are adding the character '0' or '1' to your string.
In your version you are adding the integer value 0 or 1, but the character codes of '0' and '1' are 48 and 49. Adding the remainder (0 or 1) to '0' gives a value of 48 (the character '0') or 49 (the character '1').
If you do the same thing in your code it will also work.
string findDigitsInBinary(int A) {
    if (A == 0)
        return "0";
    else
    {
        string bin = "";
        while (A > 0)
        {
            int rem = (A % 2);
            bin.push_back(static_cast<char>(rem + '0')); // Remainder + '0'
            A = A / 2;
        }
        reverse(bin.begin(), bin.end());
        return bin;
    }
}
Basically you should be adding characters to the string, not numbers. So you shouldn't be adding the integers 0 and 1 to the string, you should be adding the character codes 48 ('0') and 49 ('1').
An ASCII chart illustrates this: the character '0' is 48 in decimal. Say you wanted to add the digit 4 to the string; because '0' is 48, you would actually want to add the decimal value 52 (48 + 4) to the string. This is what '0' + rem does. This is done automatically for you if you insert a character, that is, if you do:
mystring += 'A';
It will add an 'A' character to your string, but what it is actually doing is converting that 'A' to the decimal value 65 and adding that to the string. What your code does is add the integers 0 and 1, and those values correspond to non-printable control characters in ASCII, not to the digits '0' and '1'.
Now that you understand how characters are encoded: casting an integer to a char does not turn the value into its character representation, it only changes the data type from int (most likely 4 bytes) to char (1 byte). Your cast did the following:
After the modulo % operation you got a result of either 1 or 0 as an integer. Let's say you got a remainder of 1; it would look like this as an int:
00000000 00000000 00000000 00000001
After the cast to a char it would convert it to a one-byte data type, which would make it look like this:
00000001 // Now it's a one-byte data type
Whereas the digit '1' encoded as a string character is 49, which looks like this:
00110001
As for the difference between static_cast and c-style cast, the static_cast does compile-time checks and allows casts between certain types based on particular rules, whereas a c-style cast isn't as restrictive.
char a = 5;
int* p = static_cast<int*>(&a); // Will not compile
int* p2 = (int*)&a; // Will compile and run, but is discouraged as there are risks.
*p2 = 7; // You've written past the single byte char into 3 extra bytes, which is an access violation, or undefined behaviour.

How to count how many one bits there are in a byte, in Golang?

Suppose I have two variables that only use 6 bits:
var a byte = 31 // 00011111
var b byte = 50 // 00110010
The first (a) has more one bits than b; however, b is greater than a of course, so it is not possible to use a > b.
To achieve what I need, I do one loop:
func countOneBits(byt byte) int {
    var counter int
    var divider byte
    for divider = 32; divider >= 1; divider >>= 1 {
        if byt&divider == divider {
            counter++
        }
    }
    return counter
}
This works; I can use countOneBits(a) > countOneBits(b)...
But I don't think this is the best solution for this case, and I don't think it needs a loop, which is why I'm asking here.
Is there a better alternative (performance-wise) to count how many 1s there are in six bits?
Given that the input is a single byte, a lookup table is probably the best option... it only takes 256 bytes and you get code like
count := bitcount[input]
Given that this function will be available in the package math/bits in the next Go release (1.9 this August), here is the code for a 32-bit integer.
// OnesCount32 returns the number of one bits ("population count") in x.
func OnesCount32(x uint32) int {
    return int(pop8tab[x>>24] + pop8tab[x>>16&0xff] + pop8tab[x>>8&0xff] + pop8tab[x&0xff])
}
Where pop8tab is defined here. And for your question in particular, the 8-bit version:
func OnesCount8(x uint8) int {
    return int(pop8tab[x])
}
It is also possible to count bits with binary operations. See these bit twiddling hacks.
func bitSetCount(v byte) byte {
    v = (v & 0x55) + ((v >> 1) & 0x55)
    v = (v & 0x33) + ((v >> 2) & 0x33)
    return (v + (v >> 4)) & 0xF
}
You'll have to benchmark to see if this is faster than the lookup table which is the simplest to implement.
There is also a POPCNT version for Go:
https://github.com/tmthrgd/go-popcount

How to add or subtract two enum values in swift

So I have this enum that defines different view positions on a view controller when a side bar menu is presented. I need to add, subtract, multiply, or divide the different values based on different situations. How exactly do I define functions that allow me to use the -, +, *, or / operators on the values of the enum? I can find plenty of examples that use the comparison operator ==, although I haven't been able to find any that use >=, which I also need to be able to do.
Here is the enum
enum FrontViewPosition: Int {
    case None
    case LeftSideMostRemoved
    case LeftSideMost
    case LeftSide
    case Left
    case Right
    case RightMost
    case RightMostRemoved
}
Now I'm trying to use these operators in functions like so.
func getAdjustedFrontViewPosition(_ frontViewPosition: FrontViewPosition, forSymetry symetry: Int) {
    var frontViewPosition = frontViewPosition
    if symetry < 0 {
        frontViewPosition = .Left + symetry * (frontViewPosition - .Left)
    }
}
Also in another function like so.
func rightRevealToggle(animated: Bool) {
    var toggledFrontViewPosition: FrontViewPosition = .Left
    if self.frontViewPosition >= .Left {
        toggledFrontViewPosition = .LeftSide
    }
    self.setFrontViewPosition(toggledFrontViewPosition, animated: animated)
}
I know that I need to define the functions that allow me to use these operators; I just don't understand how to go about doing it. A little help would be greatly appreciated.
The type you are trying to define has a similar algebra to pointers in that you can add an offset to a pointer to get a pointer and subtract two pointers to get a difference. Define these two operators on your enum and your other functions will work.
Any operators over your type should produce results in your type. There are different ways to achieve this, depending on your requirements. Here we shall treat your type as a wrap-around ("modulo") one - add 1 to the last literal and you get the first. To do this we use raw values from 0 to n for your types literals and use modulo arithmetic.
First we need a modulo operator which always returns a non-negative result; Swift's % can return a negative one, which is not what modulo arithmetic requires.
infix operator %% : MultiplicationPrecedence
func %%(_ a: Int, _ n: Int) -> Int
{
    precondition(n > 0, "modulus must be positive")
    let r = a % n
    return r >= 0 ? r : r + n
}
Now your enum, assigning suitable raw values:
enum FrontViewPosition: Int
{
    case None = 0
    case LeftSideMostRemoved = 1
    case LeftSideMost = 2
    case LeftSide = 3
    case Left = 4
    case Right = 5
    case RightMost = 6
    case RightMostRemoved = 7
Now we define the appropriate operators.
For addition we can add an integer to a FrontViewPosition and get a FrontViewPosition back. To do this we convert to raw values, add, and then reduce modulo 8 to wrap-around. Note the need for a ! to return a non-optional FrontViewPosition - this will always succeed due to the modulo math:
    static func +(_ x : FrontViewPosition, _ y : Int) -> FrontViewPosition
    {
        return FrontViewPosition(rawValue: (x.rawValue + y) %% 8)!
    }
For subtraction we return the integer difference between two FrontViewPosition values:
    static func -(_ x : FrontViewPosition, _ y : FrontViewPosition) -> Int
    {
        return x.rawValue - y.rawValue
    }
}
You can define further operators as needed, say a subtraction operator which takes a FrontViewPosition and an Int and returns a FrontViewPosition.
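With those two operators in place, a quick usage check (my sketch, using the raw values above), plus a Comparable conformance by raw value for the >= comparison the question also asks about:
let p: FrontViewPosition = .Left               // raw value 4
let q = p + 2                                  // .RightMost (raw value 6)
let r = p + 5                                  // wraps around to .LeftSideMostRemoved ((4 + 5) %% 8 = 1)
let diff = FrontViewPosition.RightMost - .Left // 2

extension FrontViewPosition: Comparable {
    // < is enough: ==, <=, > and >= come from Equatable synthesis and the standard library defaults.
    static func <(lhs: FrontViewPosition, rhs: FrontViewPosition) -> Bool {
        return lhs.rawValue < rhs.rawValue
    }
}
FrontViewPosition.Right >= .Left // true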
HTH
An enum can also have operator functions:
enum Tst: Int {
    case A = 10
    case B = 20
    case C = 30
    static func + (t1: Tst, t2: Tst) -> Tst {
        return Tst(rawValue: t1.rawValue + t2.rawValue)! // here could be wrong!
    }
}
var a = Tst.A
var b = Tst.B
var c = a+b
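To avoid the crash the comment warns about (for example, Tst.B + Tst.C has raw value 50, which is not a case), a failable variant could be used instead. This is my sketch, not part of the answer:
extension Tst {
    // Returns nil instead of trapping when the sum has no matching case.
    static func adding(_ t1: Tst, _ t2: Tst) -> Tst? {
        return Tst(rawValue: t1.rawValue + t2.rawValue)
    }
}

Tst.adding(.A, .B) // Optional(Tst.C)
Tst.adding(.B, .C) // nil, raw value 50 has no case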

Print Double as Int - if not a Double value

I want my Double to display as an Int, if the value is an integer - otherwise as a Double.
Example;
var Value = Double()
.
Value = 25.0 / 10.0
Now I want Value to display 2.5 (when inserted into a label).
.
Value = 20.0 / 10.0
Now I want Value to display 2 - and NOT 2.0
One approach is to obtain the fractional part using the % operator and check whether it is zero:
let stringVal = (Value % 1 == 0)
? String(format: "%.0f", Value)
: String(Value)
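Note that % is no longer available on Double in Swift 3 and later; there the same check would use truncatingRemainder(dividingBy:). A small sketch of the equivalent:
import Foundation

let value = 20.0 / 10.0
let stringVal = value.truncatingRemainder(dividingBy: 1) == 0
    ? String(format: "%.0f", value)
    : String(value)
// stringVal == "2"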
One classic way is to establish a value for epsilon which represents your tolerance for considering a value close enough to an Int:
// How close is close enough to be considered an Int?
let kEPSILON = 0.0001
var val = 1.9999
var str: String
if abs(val - round(val)) < kEPSILON {
    str = String(Int(round(val)))
} else {
    str = String(val)
}
print(str) // "2"
I like dasblinkenlight's and vacawama's answers, but also want to contribute another one: Using NSNumberFormatter
let formatter = NSNumberFormatter()
formatter.numberStyle = .DecimalStyle
formatter.alwaysShowsDecimalSeparator = false
let string0 = formatter.stringFromNumber(25.0/10.0)!
let string1 = formatter.stringFromNumber(20.0/10.0)!
print(string0)
print(string1)
result:
2.5
2
The most important advantage: it is localized. On German devices it will show 2,5 instead of 2.5, just as a German-speaking user would expect.
To display numbers as text, use NSNumberFormatter(). You can set its minimumFractionDigits
property to zero:
let fmt = NSNumberFormatter()
fmt.minimumIntegerDigits = 1
fmt.maximumFractionDigits = 4
fmt.minimumFractionDigits = 0
print(fmt.stringFromNumber(25.0 / 10.0)!) // 2,5
print(fmt.stringFromNumber(20.0 / 10.0)!) // 2
print(fmt.stringFromNumber(2.0 / 7.0)!) // 0,2857
If you want a decimal period, independent of the user's locale,
then add
fmt.locale = NSLocale(localeIdentifier: "en_US_POSIX")
Swift 3:
let fmt = NumberFormatter()
// Optional:
fmt.locale = Locale(identifier: "en_US_POSIX")
fmt.minimumIntegerDigits = 1
fmt.maximumFractionDigits = 4
fmt.minimumFractionDigits = 0
print(fmt.string(from: 25.0 / 10.0 as NSNumber)!) // 2,5
print(fmt.string(from: 20.0 / 10.0 as NSNumber)!) // 2
print(fmt.string(from: 2.0 / 7.0 as NSNumber)!) // 0,2857
Working on a calculator in Swift 4, I treated the number variables as String so I could display them on screen, converted them to Double for the calculations, and then converted them back to String to display the result. When the result was an Int I didn't want the .0 to be displayed as well, so I worked this out and it was pretty simple:
if result.truncatingRemainder(dividingBy: 1) == 0 {
    screenLabel.text = String(Int(result))
} else {
    screenLabel.text = String(result)
}
So result is the variable in Double format; if the remainder after dividing by 1 is 0 (a remainder of 0 means it is a whole number), I convert it to Int.
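The same check can be packaged as a small Double extension (a sketch; the property name is mine):
extension Double {
    // "2" for whole numbers that fit in Int, "2.5" otherwise.
    var displayString: String {
        return truncatingRemainder(dividingBy: 1) == 0 ? String(Int(self)) : String(self)
    }
}

(25.0 / 10.0).displayString // "2.5"
(20.0 / 10.0).displayString // "2"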

Program for counting identical bits in 2 hex values in Swift

I am new to programming in Swift and my task is to create a program that will return the number of identical bits in 2 hexadecimal values. I wrote something like this:
let str = "8ef4b5013e183e12eab15dc28eb3de29"
var hexAr = Array(str)
let str2 = "b19d3fa46038d6bd1d7ae5e915fb68b3"
var hex2Ar = Array(str2)
let result = 0
for i in hexAr {
//conversion to binary...
for i in binAr {
if binAr[i] == bin2Ar[i]
{result++}
}
}
Now, I found a method that converts a hex value to an array with its binary representation:
let hexValue = 0x5
let bin = String(hexValue, radix:2) //output: 101
let binArray = Array(bin) // output: ["1", "0", "1"]
The problem with the above method is that it returns 101 rather than 0101, and I need the fixed 4-bit width in order to compare identical bits in both hex values. For example, comparing 5 (0101) and D (1101) should add 3 to the result.
Thanks for helping :)
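There is no answer in this excerpt, but here is a sketch of one way to finish it (my code, not from the thread, and it uses Swift 4's nonzeroBitCount): instead of building padded binary strings, convert each hex digit to its 4-bit value and count the bit positions where the two nibbles agree.
let str = "8ef4b5013e183e12eab15dc28eb3de29"
let str2 = "b19d3fa46038d6bd1d7ae5e915fb68b3"

var identicalBits = 0
for (c1, c2) in zip(str, str2) {
    // Force-unwrapping assumes both strings contain only valid hex digits.
    let n1 = UInt8(String(c1), radix: 16)!
    let n2 = UInt8(String(c2), radix: 16)!
    // Bits that differ are set in the XOR; the remaining bits of the nibble are identical.
    identicalBits += 4 - (n1 ^ n2).nonzeroBitCount
}
print(identicalBits)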
