How to fix 'constant x overflows byte' error in Go?

Hello, I am trying to make a byte slice from constants, but I get the 'constant x overflows byte' error.
Here are my constants:
const (
    Starttrame1 = 0x10A
    Starttrame2 = 0x10B
    Starttrame3 = 0x10C
    Starttrame4 = 0x10D
    Starttrame5 = 0x10E
    Starttrame6 = 0x10F
)
and here is how I declare my slice:
var startValues = [6]byte{Starttrame1, Starttrame2, Starttrame3, Starttrame4, Starttrame5, Starttrame6}
Every time I build, I get 'constant 266 overflows byte'. How should I declare my constants to fix this?

In Go, byte is an alias for uint8, the set of all unsigned 8-bit integers (0..255, both inclusive); see Spec: Numeric types. This means the value 0x10A = 266 cannot be stored in a value of type byte.
If you need to store those constants, use a different type, e.g. uint16:
const (
    Starttrame1 = 0x10A
    Starttrame2 = 0x10B
    Starttrame3 = 0x10C
    Starttrame4 = 0x10D
    Starttrame5 = 0x10E
    Starttrame6 = 0x10F
)
var data = [...]uint16{
    Starttrame1, Starttrame2, Starttrame3, Starttrame4, Starttrame5, Starttrame6,
}
Try it on the Go Playground.
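Alternatively, if the goal really is a byte array and only the low byte of each marker matters (an assumption; the question doesn't say what the values encode), you can mask before converting. The masked constant expressions fit in a byte, so this compiles; a minimal sketch:

package main

import "fmt"

const (
    Starttrame1 = 0x10A
    Starttrame2 = 0x10B
    Starttrame3 = 0x10C
    Starttrame4 = 0x10D
    Starttrame5 = 0x10E
    Starttrame6 = 0x10F
)

// Masking with 0xFF keeps only the low 8 bits of each constant,
// so every element fits in a byte.
var startLow = [6]byte{
    Starttrame1 & 0xFF, Starttrame2 & 0xFF, Starttrame3 & 0xFF,
    Starttrame4 & 0xFF, Starttrame5 & 0xFF, Starttrame6 & 0xFF,
}

func main() {
    fmt.Println(startLow) // [10 11 12 13 14 15]
}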

Related

enum value defined in hex grows and turns into negative value

I'm maintaining a project which contains the following enum type definition. The enum values are used in a combobox.
Why is this enum type defined in hex like this? For performance improvement?
xxxxx920P3 is actually a negative value (-2147483648) while xxxxx920P2 is positive, which causes the conditional code to fail. The next value would be twice as big as xxxxx920P3, so is there an alternative solution for this enum definition rule? Thanks.
It is a Qt C++ project. Can I define an enum type as ULONGLONG?
enum Version
{
    xxxxx = 0x00000000,
    xxxxx400 = 0x00000001,
    xxxxx401 = 0x00000002,
    xxxxx410 = 0x00000004,
    xxxxx411 = 0x00000008,
    xxxxx412 = 0x00000010,
    xxxxx420 = 0x00000020,
    xxxxx430 = 0x00000040,
    xxxxx431 = 0x00000080,
    xxxxx432 = 0x00000100,
    xxxxx440 = 0x00000200,
    xxxxx500 = 0x00000400,
    xxxxx510 = 0x00000800,
    xxxxx520 = 0x00001000,
    xxxxx521 = 0x00002000,
    xxxxx600 = 0x00004000,
    xxxxx611 = 0x00008000,
    xxxxx620 = 0x00010000,
    xxxxx621 = 0x00020000,
    xxxxx700 = 0x00040000,
    xxxxx910 = 0x00080000,
    xxxxx910P5 = 0x00100000,
    xxxxx910P6 = 0x00200000,
    xxxxx910P11 = 0x00400000,
    xxxxx910P12 = 0x00800000,
    xxxxx910P13 = 0x01000000,
    xxxxx910P14 = 0x02000000,
    xxxxx910P15 = 0x04000000,
    xxxxx910P16 = 0x08000000,
    xxxxx920 = 0x10000000,
    xxxxx920P1 = 0x20000000,
    xxxxx920P2 = 0x40000000,
    xxxxx920P3 = 0x80000000,
};
Version newVersions = (Version)mComboBox->itemData(inIndex).toUInt();
if (newVersions < xxxxx500) // now newVersions is a negative value
{
}
else
{
}
In Standard C, enumerators have type int, and each value must be in range of int. 0x80000000 is out of range for int; this is a constraint violation that requires a diagnostic.
So what happens depends on your compiler. The compiler implements an extension of its own devising for enumerators out of range for int. Based on the evidence you posted, your compiler gives that enumerator the value INT_MIN (a large negative number).
You will have to design your code to take this into account, e.g. have a specific branch of the version test for xxxxx920P3.
When an enumeration contains a bunch of one-bit flags, it's usually so that they can be combined, e.g. xxxxx600 | xxxxx700 | xxxxx432, giving a single value that can represent any set of elements.
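Since this is a Qt C++ project, C++11 offers a direct answer to the ULONGLONG question: give the enum an explicit underlying type. A minimal sketch, reusing names from the question (xxxxx930 is a hypothetical next value, not from the original code):

#include <cstdint>
#include <iostream>

// With a fixed 64-bit underlying type, 0x80000000 no longer wraps to
// INT_MIN, and there is room to keep doubling past 32 bits.
enum Version : std::uint64_t
{
    xxxxx500   = 0x00000400,
    xxxxx920P2 = 0x40000000,
    xxxxx920P3 = 0x80000000,     // now a positive value
    xxxxx930   = 0x100000000ULL, // hypothetical next flag
};

int main()
{
    Version v = xxxxx920P3;
    std::cout << (v < xxxxx500) << '\n'; // prints 0: comparison stays unsigned
}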

What is the difference between google.protobuf.Any and google.protobuf.Value?

I want to serialize int/int64/double/float/uint32/uint64 into protobuf. Which one should I use? Which one is more efficient?
For example:
message Test {
    google.protobuf.Any any = 1;     // solution 1
    google.protobuf.Value value = 2; // solution 2
}

message Test { // solution 3
    oneof Data {
        uint32 int_value = 1;
        double double_value = 2;
        bytes string_value = 3;
        ...
    }
}
In your case, you'd better use oneof.
You cannot pack a built-in type such as double, int32, or int64 into google.protobuf.Any, nor unpack one from it. You can only pack or unpack a message, i.e. a class derived from google::protobuf::Message.
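For contrast, a short sketch of what Any does support in C++ (test.pb.h and the Inner message are hypothetical stand-ins for your own generated code):

#include <google/protobuf/any.pb.h>
#include "test.pb.h" // hypothetical: defines a message named Inner

int main()
{
    google::protobuf::Any any;
    Inner in;
    any.PackFrom(in);    // OK: Inner derives from google::protobuf::Message
    // any.PackFrom(42); // won't compile: a plain int is not a Message

    Inner out;
    if (any.UnpackTo(&out))
    {
        // the payload type matched and the message was restored
    }
}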
google.protobuf.Value, in fact, is a wrapper on oneof:
message Value {
    // The kind of value.
    oneof kind {
        // Represents a null value.
        NullValue null_value = 1;
        // Represents a double value.
        double number_value = 2;
        // Represents a string value.
        string string_value = 3;
        // Represents a boolean value.
        bool bool_value = 4;
        // Represents a structured value.
        Struct struct_value = 5;
        // Represents a repeated `Value`.
        ListValue list_value = 6;
    }
}
Also, from the definition of google.protobuf.Value you can see that there are no int32, int64, or uint64 fields, only a double field. IMHO (correct me if I'm wrong), you might lose precision if the integer is very large. Normally, google.protobuf.Value is used with google.protobuf.Struct. Check google/protobuf/struct.proto for details.
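Since oneof is the recommendation here, a sketch of how the generated C++ API for solution 3 looks in use (the exact names are what protoc produces for a oneof called Data inside message Test; test.pb.h is the hypothetical generated header):

#include <iostream>
#include "test.pb.h" // hypothetical: generated from the solution-3 schema

int main()
{
    Test t;
    t.set_double_value(3.14); // setting one member clears the others

    // data_case() reports which member of the oneof is currently set.
    switch (t.data_case())
    {
    case Test::kIntValue:
        std::cout << t.int_value() << '\n';
        break;
    case Test::kDoubleValue:
        std::cout << t.double_value() << '\n';
        break;
    case Test::kStringValue:
        std::cout << t.string_value() << '\n';
        break;
    case Test::DATA_NOT_SET:
        break;
    }
}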

How to Initialize Variable to Maximum Value

I am trying to figure out how to initialize a variable in VBScript to its maximum value.
For example, in C++, I would do something like:
double x = MAX_DOUBLE;
I am not sure how to do this in VBScript.
UPDATE
For now, I have defined the variable myself as constant value in the global scope of the script. I am not sure if this is the most elegant way of doing this. Is there a built-in variable I can use?
' Note: VBScript Const only accepts literal values, not function calls like CDbl
Const MAX_DOUBLE = 1.79769313486232e307
Const MIN_DOUBLE = -1.79769313486232e307
I've never found the limits described on MSDN to be accurate for many of the VBScript data types. For example, the Currency type gives me an overflow for anything > XXX.5625, even though the docs say it should go to XXX.5808. The same goes for Double: the docs say the max should be 1.79769313486232e308, but that final 2 in the mantissa causes an overflow. These are the values I've used in the past:
Const MIN_BYTE = 0
Const MAX_BYTE = 255
Const MIN_INTEGER = -32768
Const MAX_INTEGER = 32767
Const MIN_LONG = -2147483648
Const MAX_LONG = 2147483647
Const MIN_SINGLE = -3.402823e38
Const MAX_SINGLE = 3.402823e38
Const MIN_DOUBLE = -1.79769313486231e308
Const MAX_DOUBLE = 1.79769313486231e308
Const MIN_CURRENCY = -922337203685477.5625
Const MAX_CURRENCY = 922337203685477.5625
Const MIN_DATE = #100/1/1#
Const MAX_DATE = #9999/12/31#
Because VBScript uses Variants, however, note that you may not get the type you expect when assigning a "max" (or min) value to a variable. For example:
b = MAX_BYTE ' Actually type Integer
s = MAX_SINGLE ' Actually type Double
c = MAX_CURRENCY ' Actually type Double
If you want to ensure you're getting the proper data type in return, you'll need to explicitly cast:
b = CByte(MAX_BYTE) ' Type Byte
s = CSng(MAX_SINGLE) ' Type Single
c = CCur(MAX_CURRENCY) ' Type Currency
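You can confirm which subtype a Variant actually holds with TypeName (or VarType). A quick sketch, run under cscript/wscript:

Dim b, s
b = 255
WScript.Echo TypeName(b)  ' Integer: the literal defaulted to Integer
b = CByte(255)
WScript.Echo TypeName(b)  ' Byte: the cast forced the subtype
s = 3.402823e38
WScript.Echo TypeName(s)  ' Double
s = CSng(3.402823e38)
WScript.Echo TypeName(s)  ' Single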

Go integer overflow settings in command line and playground

This program runs fine on my machine (go1.2.1 linux/amd64):
package main

import "fmt"

const bigint = 1 << 62

func main() {
    fmt.Println(bigint)
}
But on the Go Playground, it gives an overflow error: http://play.golang.org/p/lAUwLwOIVR
It seems that my build is configured with 64 bits for integer constants, while the playground is configured with 32 bits.
But doesn't the spec say that an implementation must give at least 256 bits of precision for constants?
Also see the code in my other question; the scanner standard package has:
const GoWhitespace = 1<<'\t' | 1<<'\n' | 1<<'\r' | 1<<' '
Since space is 32, this shouldn't work on the 32-bit playground at all.
How can this be?
Constants in general
Constants themselves are not limited in precision, but when used in code they are converted to a suitable type.
From the spec:
A constant may be given a type explicitly by a constant declaration or conversion, or implicitly when used in a variable declaration or an assignment or as an operand in an expression. It is an error if the constant value cannot be represented as a value of the respective type. For instance, 3.0 can be given any integer or any floating-point type, while 2147483648.0 (equal to 1<<31) can be given the types float32, float64, or uint32 but not int32 or string.
So if you have
const a = 1 << 33
fmt.Println(a)
you will get an overflow error, as the default type for integer constants, int, can't hold the value 1 << 33 in 32-bit environments. If you convert the constant to int64, everything is fine on all platforms:
const a = 1 << 33
fmt.Println(int64(a))
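The "at least 256 bits" guarantee applies while an expression is still a constant; only the final value has to fit the target type. A minimal sketch:

package main

import "fmt"

func main() {
    // 1<<100 fits in no integer type, but constant arithmetic is exact,
    // and the final constant 1024 easily fits the default type int.
    const huge = 1 << 100
    fmt.Println(huge >> 90) // 1024, on 32- and 64-bit platforms alike
}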
Scanner
The constant GoWhitespace is not directly used in the scanner.
The Whitespace field of the Scanner type is of type uint64, and GoWhitespace is assigned to it:
s.Whitespace = GoWhitespace
This means you are dealing with a uint64 value, and 1 << ' ' (i.e. 1 << 32) is perfectly valid.
Example (on play):
const w = 1<<'\t' | 1<<'\n' | 1<<'\r' | 1<<' '
c := ' '
// fmt.Println(w & (1 << uint(c))) // fails with overflow error
fmt.Println(uint64(w) & (1 << uint(c))) // works as expected
As stated by nemo, you can give a type to your constant. Just specify int64 and it works fine :)
http://play.golang.org/p/yw2vsvMigk
package main

import "fmt"

const bigint int64 = 1 << 62

func main() {
    fmt.Println(bigint)
}

Converting Decimal to ASCII Character

I am trying to convert a decimal number to its character equivalent. For example:
int j = 65; // The character equivalent would be 'A'.
Sorry, forgot to specify the language. I thought I did. I am using Cocoa/Objective-C. It is really frustrating. I have tried the following, but it is still not converting correctly.
char_num1 = [working_text characterAtIndex:i];   // value = 65
char_num2 = [working_text characterAtIndex:i+1]; // value = 75
char_num3 = char_num1 + char_num2;               // value = 140
char_str1 = [NSString stringWithFormat:@"%c", char_num3]; // mapped value = 229
char_str2 = [char_str2 stringByAppendingString:char_str1];
When char_num1 and char_num2 are added, I get the new ASCII decimal value. However, when I try to convert the new decimal value to a character, I do not get the character that is mapped to char_num3.
Convert a character to a number in C:
int j = 'A';
Convert a number to a character in C:
char ch = 65;
Convert a character to a number in Python:
j = ord('A')
Convert a number to a character in Python:
ch = chr(65)
Most languages have a 'char' function, so it would be Char(j)
I'm not sure what language you're asking about. In Java, this works:
int a = 'a';
It's quite often done with "chr" or "char", but some indication of the language / platform would be useful :-)
string k = Chr(j);
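Pulling the C fragments above together into one runnable sketch of the round trip:

#include <stdio.h>

int main(void)
{
    int j = 'A';        /* character to number: j is 65 */
    char ch = 65;       /* number to character: ch is 'A' */
    printf("%d\n", j);  /* prints 65 */
    printf("%c\n", ch); /* prints A */
    return 0;
}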
