Go integer overflow settings in command line and playground

This program runs fine on my machine (go1.2.1 linux/amd64):
package main

import "fmt"

const bigint = 1 << 62

func main() {
    fmt.Println(bigint)
}
But on the Go playground it gives an overflow error - http://play.golang.org/p/lAUwLwOIVR
It seems that my build is configured with 64 bits for integer constants, while the playground is configured with 32 bits.
But the spec says that an implementation must give at least 256 bits of precision for constants?
Also see the code in my other question -- the standard scanner package has this code:
const GoWhitespace = 1<<'\t' | 1<<'\n' | 1<<'\r' | 1<<' '
Since ' ' is 32, this doesn't work on the 32-bit playground at all.
How can this be?

Constants in general
Constants themselves are not limited in precision, but when used in code they are converted to a suitable type.
From the spec:
A constant may be given a type explicitly by a constant declaration or conversion, or implicitly when used in a variable declaration or an assignment or as an operand in an expression. It is an error if the constant value cannot be represented as a value of the respective type. For instance, 3.0 can be given any integer or any floating-point type, while 2147483648.0 (equal to 1<<31) can be given the types float32, float64, or uint32 but not int32 or string.
So if you have
const a = 1 << 33
fmt.Println(a)
you will get an overflow error, as the default type for integer constants, int, can't hold the value 1 << 33 in 32-bit environments. If you convert the constant to int64, everything is fine on all platforms:
const a = 1 << 33
fmt.Println(int64(a))
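Put together as a complete program (a minimal sketch combining the two snippets above), this compiles and runs on both 32- and 64-bit platforms:
package main

import "fmt"

const a = 1 << 33

func main() {
    // fmt.Println(a)     // on 32-bit platforms: constant 8589934592 overflows int
    fmt.Println(int64(a)) // prints 8589934592 everywhere
}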
Scanner
The constant GoWhitespace is not directly used in the scanner.
The Whitespace field of the Scanner type is of type uint64, and GoWhitespace is assigned to it:
s.Whitespace = GoWhitespace
This means you deal with a uint64 value, and 1 << ' ' (i.e. 1 << 32) is perfectly valid.
Example (on play):
const w = 1<<'\t' | 1<<'\n' | 1<<'\r' | 1<<' '
c := ' '
// fmt.Println(w & (1 << uint(c))) // fails with overflow error
fmt.Println(uint64(w) & (1 << uint(c))) // works as expected
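For completeness, a minimal runnable sketch of how GoWhitespace typically ends up in that uint64 Whitespace field of text/scanner's Scanner (the input string is arbitrary):
package main

import (
    "fmt"
    "strings"
    "text/scanner"
)

func main() {
    var s scanner.Scanner
    s.Init(strings.NewReader("foo bar\tbaz"))
    s.Whitespace = scanner.GoWhitespace // uint64 field, so the 1<<' ' bit fits
    for tok := s.Scan(); tok != scanner.EOF; tok = s.Scan() {
        fmt.Println(s.TokenText()) // foo, bar, baz
    }
}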

As stated by nemo, you can give a type to your constant. Just specify int64 and it works fine :)
http://play.golang.org/p/yw2vsvMigk
package main

import "fmt"

const bigint int64 = 1 << 62

func main() {
    fmt.Println(bigint)
}

Related

How to fix 'constant x overflows byte' error in Go?

Hello, I am trying to make a byte slice with constants, but I get the constant x overflows byte error.
Here are my constants:
const (
    Starttrame1 = 0x10A
    Starttrame2 = 0x10B
    Starttrame3 = 0x10C
    Starttrame4 = 0x10D
    Starttrame5 = 0x10E
    Starttrame6 = 0x10F
)
and here is how I declare my slice:
var startValues = [6]byte{Starttrame1, Starttrame2, Starttrame3, Starttrame4, Starttrame5, Starttrame6}
Every time I build I get constant 266 overflows byte. How should I declare my constants in order to fix this?
In Go, byte is an alias for uint8, which is the set of all unsigned 8-bit integers (0..255, both inclusive), see Spec: Numeric types. Which means a value of 0x10A = 266 cannot be stored in a value of type byte.
If you need to store those constants, use a different type, e.g. uint16:
const (
    Starttrame1 = 0x10A
    Starttrame2 = 0x10B
    Starttrame3 = 0x10C
    Starttrame4 = 0x10D
    Starttrame5 = 0x10E
    Starttrame6 = 0x10F
)

var data = [...]uint16{
    Starttrame1, Starttrame2, Starttrame3, Starttrame4, Starttrame5, Starttrame6,
}
Try it on the Go Playground.

Swift Xcode6B3 - Byte Swapping - Undefined symbol "__OSSwapInt16"

I wanted to use the Swift method CFSwapInt16BigToHost, but I can't get it to link. I link against the CoreFoundation framework, but every time I get the following error:
Undefined symbols for architecture i386:
"__OSSwapInt16", referenced from:
Did I miss something?
Yes, for some reason the CFSwap... functions cannot be used in a Swift program.
But since Xcode 6 beta 3, all integer types have little/bigEndian: constructors
and little/bigEndian properties.
From the UInt16 struct definition:
/// Creates an integer from its big-endian representation, changing the
/// byte order if necessary.
init(bigEndian value: UInt16)
/// Creates an integer from its little-endian representation, changing the
/// byte order if necessary.
init(littleEndian value: UInt16)
/// Returns the big-endian representation of the integer, changing the
/// byte order if necessary.
var bigEndian: UInt16 { get }
/// Returns the little-endian representation of the integer, changing the
/// byte order if necessary.
var littleEndian: UInt16 { get }
Example:
// Data buffer containing the number 1 in 16-bit, big-endian order:
var bytes : [UInt8] = [ 0x00, 0x01]
let data = NSData(bytes: &bytes, length: bytes.count)
// Read data buffer into integer variable:
var i16be : UInt16 = 0
data.getBytes(&i16be, length: sizeofValue(i16be))
println(i16be) // Output: 256
// Convert from big-endian to host byte-order:
let i16 = UInt16(bigEndian: i16be)
println(i16) // Output: 1
Update: As of Xcode 6.1.1, the CFSwap... functions are available in Swift, so
let i16 = CFSwapInt16BigToHost(i16be)
let i16 = UInt16(bigEndian: i16be)
both work, with identical results.
It looks like those are handled with a combination of macros and inline functions, so I don't know why it wouldn't already be statically compiled into the CF version.
Generally, to solve this kind of dependency riddle, you can just search for the naked function name without the prefixed underscores, then figure out where it should be linked from:
#define OSSwapInt16(x) __DARWIN_OSSwapInt16(x)
then
#define __DARWIN_OSSwapInt16(x) \
    ((__uint16_t)(__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt16(x) : _OSSwapInt16(x)))
then
__DARWIN_OS_INLINE
__uint16_t
_OSSwapInt16(
    __uint16_t _data
)
{
    return ((__uint16_t)((_data << 8) | (_data >> 8)));
}
I know this isn't a real answer, but it was too big for a comment.
I think you may need to find out if there is a problem with the way that Swift imports the headers... for instance, whether the macros that import the headers are set up correctly in a Swift project.
As the other answers have pointed out, the __OSSwapInt16 swap function does not appear to be in the Swift header of the CFByte framework. I think the Swift-like alternative would be:
var dataLength: UInt16 = 24
var swapped = UInt16(dataLength).byteSwapped

Golang: declare a single constant

Which is the preferred way to declare a single constant in Go?
1)
const myConst
2)
const (
    myConst
)
Both ways are accepted by gofmt. Both ways are found in stdlib, though 1) is used more.
The second form is mainly for grouping several constant declarations.
If you have only one constant, the first form is enough.
For instance, archive/tar/reader.go:
const maxNanoSecondIntSize = 9
But in archive/zip/struct.go:
// Compression methods.
const (
    Store   uint16 = 0
    Deflate uint16 = 8
)
That doesn't mean you have to group all constants in one const () block: when you have constants initialized by iota (successive integers), each block counts separately.
See for instance cmd/yacc/yacc.go
// flags for state generation
const (
    DONE = iota
    MUSTDO
    MUSTLOOKAHEAD
)

// flags for a rule having an action, and being reduced
const (
    ACTFLAG = 1 << (iota + 2)
    REDFLAG
)
dalu adds in the comments:
it can also be done with import, type, var, and more than once.
It is true, but you will find iota used only in constant declarations, and that would force you to define multiple const () blocks if you need multiple sets of consecutive integer constants.
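A minimal sketch of that point: iota restarts at zero in each const () block, so separate consecutive sequences need separate blocks (the names A..D are placeholders):
package main

import "fmt"

const (
    A = iota // 0
    B        // 1
)

const (
    C = iota // iota restarts at 0 in a new const block
    D        // 1
)

func main() {
    fmt.Println(A, B, C, D) // 0 1 0 1
}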

Can a breakpoint display the contents of "const unsigned char* variable"?

I'm on the trail of why the contents of a TXT record in a Bonjour service discovery are sometimes being incompletely interpreted, and I've reached a point where it would be really useful to have a breakpoint print out the contents of an unsigned char buffer in a callback (I've tried NSLog, but using NSLog in a threaded callback can get really tricky).
The callback function is defined this way:
static void resolveCallback(DNSServiceRef sdRef, DNSServiceFlags flags, uint32_t interfaceIndex, DNSServiceErrorType errorCode,
                            const char* fullname, const char* hosttarget, uint16_t port, uint16_t txtLen,
                            const unsigned char* txtRecord, void* context) {
So I'm interested in txtRecord.
Right now my breakpoint is using:
memory read --size 4 --format x --count 4 `txtRecord`
But that's only because that was an example on the lldb.llvm.org example page ;-) It's certainly showing data that I expect to be there, partially.
Do I have to apply informed knowledge of the length, or can the breakpoint be coded such that it uses the length that is present? I'm thinking that instead of "hard coding" the two 4s in the example, there ought to be a way to wrap other read instructions inside backticks like I did with the variable name.
Looking at http://lldb.llvm.org/varFormats.html I thought I'd try a format of C instead of x, but that prints out a series of dots, which must mean I picked the wrong format or something.
I just tried
memory read `txtRecord`
and that's almost exactly what I wanted to see as it gives:
0x1c5dd884: 10 65 6e 30 3d 31 39 32 2e 31 36 38 2e 31 2e 33 .en0=192.168.1.3
0x1c5dd894: 36 0a 70 6f 72 74 3d 35 30 32 37 38 00 00 00 00 6.port=50278....
This looks really close:
memory read `txtRecord` --format C
giving:
0x1d0c6974: .en0=192.168.1.36.port=50278....
If that's the best I can get, I guess I can deal with the length bytes in front of each of the two strings in that txtRecord.
I'm asking this question because I'd like to display the actual and correct values... the bug is that sometimes the IP address comes back wrong, losing the frontmost 1, other times the port comes back "short" (in network byte order) with non-numeric characters at the end, like "502¿" instead of "50278" (in this example run).
My initial response to this question, while informative, was not complete. I originally thought the problem being reported was just about printing a c-string array of type unsigned char * where the default formatters (char *) weren't being used. That answer comes first. Then comes the answer about how to print this (somewhat unique) array of pascal strings data that the program is actually dealing with.
First answer: lldb knows how to handle the char * well; it's the unsigned char * bit that is making it behave a little worse than usual. e.g. if txtRecord were a const char *,
(lldb) p txtRecord
(const char *) $0 = 0x0000000100000f51 ".en0=192.168.1.36.port=50278"
You can copy the type summary lldb has built in for char * for unsigned char *. type summary list lists all of the built in type summaries; copying lldb-179.5's summaries for char *:
(lldb) type summary add -p -C false -s ${var%s} 'unsigned char *'
(lldb) type summary add -p -C false -s ${var%s} 'const unsigned char *'
(lldb) fr va txtRecord
(const unsigned char *) txtRecord = 0x0000000100000f51 ".en0=192.168.1.36.port=50278"
(lldb) p txtRecord
(const unsigned char *) $2 = 0x0000000100000f51 ".en0=192.168.1.36.port=50278"
(lldb)
Of course you can put these in your ~/.lldbinit file and they'll be picked up by Xcode et al from now on.
Second answer: To print the array of pascal strings that this is actually using, you'll need to create a python function. It will take two arguments, the size of the pascal string buffer (txtLen) and the address of the start of the buffer (txtRecord). Create a python file like pstrarray.py (I like to put these in a directory I made, ~/lldb) and load it into your lldb via the ~/.lldbinit file so you have the command available:
command script import ~/lldb/pstrarray.py
The python script is a little long; I'm sure someone more familiar with python could express this more concisely. There's also a bunch of error handling, which adds bulk. But the main idea is to take two parameters: the size of the buffer and the pointer to the buffer. The user will express these with variable names like pstrarray txtLen txtRecord, in which case you could look up the variables in the current frame, but they might also want to use an actual expression like pstrarray sizeof(str) str. So we need to pass these parameters through the expression evaluation engine to get them down to an integer size and a pointer address. Then we read the memory out of the process and print the strings.
import lldb
import shlex
import optparse

def pstrarray(debugger, command, result, dict):
    command_args = shlex.split(command)
    parser = create_pstrarray_options()
    try:
        (options, args) = parser.parse_args(command_args)
    except:
        return
    if debugger and debugger.GetSelectedTarget() and debugger.GetSelectedTarget().GetProcess():
        process = debugger.GetSelectedTarget().GetProcess()
        if len(args) < 2:
            print "Usage: pstrarray size-of-buffer pointer-to-array-of-pascal-strings"
            return
        if process.GetSelectedThread() and process.GetSelectedThread().GetSelectedFrame():
            frame = process.GetSelectedThread().GetSelectedFrame()
            size_of_buffer_sbval = frame.EvaluateExpression(args[0])
            if not size_of_buffer_sbval.IsValid() or size_of_buffer_sbval.GetValueAsUnsigned(lldb.LLDB_INVALID_ADDRESS) == lldb.LLDB_INVALID_ADDRESS:
                print 'Could not evaluate "%s" down to an integral value' % args[0]
                return
            size_of_buffer = size_of_buffer_sbval.GetValueAsUnsigned()
            address_of_buffer_sbval = frame.EvaluateExpression(args[1])
            if not address_of_buffer_sbval.IsValid():
                print 'could not evaluate "%s" down to a pointer value' % args[1]
                return
            address_of_buffer = address_of_buffer_sbval.GetValueAsUnsigned(lldb.LLDB_INVALID_ADDRESS)
            # If the expression eval didn't give us an integer value, try it again with an & prepended.
            if address_of_buffer == lldb.LLDB_INVALID_ADDRESS:
                address_of_buffer_sbval = frame.EvaluateExpression('&%s' % args[1])
                if address_of_buffer_sbval.IsValid():
                    address_of_buffer = address_of_buffer_sbval.GetValueAsUnsigned(lldb.LLDB_INVALID_ADDRESS)
            if address_of_buffer == lldb.LLDB_INVALID_ADDRESS:
                print 'could not evaluate "%s" down to a pointer value' % args[1]
                return
            err = lldb.SBError()
            pascal_string_buffer = process.ReadMemory(address_of_buffer, size_of_buffer, err)
            if err.Fail():
                print 'Failed to read memory at address 0x%x' % address_of_buffer
                return
            pascal_string_array = bytearray(pascal_string_buffer, 'ascii')
            index = 0
            while index < size_of_buffer:
                length = ord(pascal_string_buffer[index])
                print "%s" % pascal_string_array[index+1:index+1+length]
                index = index + length + 1

def create_pstrarray_options():
    usage = "usage: %prog"
    description = '''print a buffer which has an array of pascal strings in it'''
    parser = optparse.OptionParser(description=description, prog='pstrarray', usage=usage)
    return parser

def __lldb_init_module(debugger, dict):
    parser = create_pstrarray_options()
    pstrarray.__doc__ = parser.format_help()
    debugger.HandleCommand('command script add -f %s.pstrarray pstrarray' % __name__)
and an example program to run this on:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main ()
{
    unsigned char str[] = {16,'e','n','0','=','1','9','2','.','1','6','8','.','1','.','3','6',
                           10,'p','o','r','t','=','5','1','6','8','7'};
    uint8_t *p = str;
    while (p < str + sizeof (str))
    {
        int len = *p++;
        char buf[len + 1];
        strlcpy (buf, (char*) p, len + 1);
        puts (buf);
        p += len;
    }
    puts ("done"); // break here
}
and in use:
(lldb) br s -p break
Breakpoint 1: where = a.out`main + 231 at a.c:17, address = 0x0000000100000ed7
(lldb) r
Process 74549 launched: '/private/tmp/a.out' (x86_64)
en0=192.168.1.36
port=51687
Process 74549 stopped
* thread #1: tid = 0x1c03, 0x0000000100000ed7 a.out`main + 231 at a.c:17, stop reason = breakpoint 1.1
  #0: 0x0000000100000ed7 a.out`main + 231 at a.c:17
   14           puts (buf);
   15           p += len;
   16       }
-> 17       puts ("done"); // break here
   18   }
(lldb) pstrarray sizeof(str) str
en0=192.168.1.36
port=51687
(lldb)
While it's cool that it's possible to do this in lldb, it's not as smooth as we'd like to see. If the size of the buffer and the address of the buffer were contained in a single object, struct PStringArray {uint16_t size; uint8_t *addr;}, that would work much better. You could define a type summary formatter for all variables of type struct PStringArray and no special commands would be required. You'd still need to write a python function, but it could get all the information it needed out of the object directly so it would disappear into the lldb type format system. You could just write (lldb) p strs and the custom formatter function would be called on strs to print all the strings in there.

Create an enum from a group of related constants in Go

What is the preferred (or right) way to group a large number of related constants in the Go language? For example, C# and C++ both have enum for this.
const?
const (
    pi  = 3.14
    foo = 42
    bar = "hello"
)
There are two options, depending on how the constants will be used.
The first is to create a new type based on int, and declare your constants using this new type, e.g.:
type MyFlag int

const (
    Foo MyFlag = 1
    Bar // repeats the previous expression list, so Bar is also MyFlag(1)
)

Foo and Bar will have type MyFlag. If you want to extract the int value back from a MyFlag, you need a type conversion:
var i int = int(Bar)
If this is inconvenient, use newacct's suggestion of a bare const block:
const (
    Foo = 1
    Bar = 2
)
Foo and Bar are untyped (ideal) constants that can be assigned to variables of type int, float64, and so on.
This is covered in Effective Go in the Constants section. See also the discussion of the iota keyword for auto-assignment of values like C/C++.
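For reference, the Constants section of Effective Go illustrates that iota usage with its ByteSize example (abridged here):
type ByteSize float64

const (
    _           = iota // ignore first value by assigning to blank identifier
    KB ByteSize = 1 << (10 * iota)
    MB
    GB
)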
My closest approach to enums is to declare the constants with a distinct type. At least you get some type safety, which is the major perk of an enum type.
type PoiType string

const (
    Camping   PoiType = "Camping"
    Eatery    PoiType = "Eatery"
    Viewpoint PoiType = "Viewpoint"
)
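A small sketch of that type-safety perk (describePoi is a hypothetical helper, not part of the original answer):
package main

import "fmt"

type PoiType string

const (
    Camping   PoiType = "Camping"
    Eatery    PoiType = "Eatery"
    Viewpoint PoiType = "Viewpoint"
)

// describePoi accepts only PoiType values, not arbitrary string variables.
func describePoi(t PoiType) {
    fmt.Println("POI:", t)
}

func main() {
    describePoi(Camping)
    var s string = "Eatery"
    // describePoi(s)       // compile error: cannot use s (type string) as type PoiType
    describePoi(PoiType(s)) // an explicit conversion is required
}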
It depends on what you want to achieve by this grouping. Go allows grouping with the following braces syntax:
const (
    c0 = 123
    c1 = 67.23
    c2 = "string"
)
This just adds a nice visual block for the programmer (editors can fold it), but does nothing for the compiler (you cannot give the block a name).
The only thing that depends on this block is the iota constant generator (which is pretty handy for enums). It allows you to create auto-increments easily (and far more than just auto-increments: more on this in the link).
For example this:
const (
    c0 = 3 + 5*iota
    c1
    c2
)
will create constants c0 = 3 (3 + 5 * 0), c1 = 8 (3 + 5 * 1) and c2 = 13 (3 + 5 * 2).
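As a quick check, here is a runnable version of that block (a sketch, not from the original answer):
package main

import "fmt"

const (
    c0 = 3 + 5*iota // 3 + 5*0 = 3
    c1              // 3 + 5*1 = 8
    c2              // 3 + 5*2 = 13
)

func main() {
    fmt.Println(c0, c1, c2) // 3 8 13
}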
