Rune vs byte ranging over string - go

According to https://blog.golang.org/strings and my testing, it looks like when we range over a string, the characters we get are of type rune, but if we access one with str[index], it will be of type byte. Why is that?

To the first level, the why is because that's how the language is defined. The specification's section on string types tells us that:
A string value is a (possibly empty) sequence of bytes. The number of bytes is called the length of the string and is never negative. Strings are immutable: once created, it is impossible to change the contents of a string.
and:
A string's bytes can be accessed by integer indices 0 through len(s)-1.
Meanwhile, range is a clause you can insert into a for statement, and the specification says:
The expression on the right in the "range" clause is called the range expression, which may be ... [a] string ...
and:
For a string value, the "range" clause iterates over the Unicode code points in the string starting at byte index 0. On successive iterations, the index value will be the index of the first byte of successive UTF-8-encoded code points in the string, and the second value, of type rune, will be the value of the corresponding code point. If the iteration encounters an invalid UTF-8 sequence, the second value will be 0xFFFD, the Unicode replacement character, and the next iteration will advance a single byte in the string.
If you want to know why the language is defined that way, you really have to ask the definers themselves. However, note that if for ranged only over the bytes, you'd need to construct your own fancier loops to range over the runes. Given that for ... range does work through the runes, if you want to work through the bytes in string s instead, you can write:
for i := 0; i < len(s); i++ {
    ...
}
and easily access s[i] inside the loop. You can also write:
for i, b := range []byte(s) {
    ...
}
and access both index i and byte b inside the loop. (Conversion from string to []byte, or vice versa, can require a copy since []byte can be modified. In this case, though, the range does not modify it and the compiler can optimize away the copy. See icza's comment below or this answer to golang: []byte(string) vs []byte(*string).) So you have not lost any ability, just perhaps a smidgen of concision.

Just a quick and simple answer on why the language is defined this way.
Think about what a rune is. A rune represents a Unicode code point, which can be composed of multiple bytes and also have different representations depending on the encoding.
Now think what doing mystring[i] would mean if that returned a rune and not a byte. Since you cannot know the length of each rune without scanning the string, that operation would require scanning the whole string every single time, thus making array-like access take O(n) instead of O(1).
It would be very counter-intuitive for the users of the language if mystring[i] scanned the whole string every time, and also more complex for the language developers. This is why most programming languages (like Go, Rust, Python) differentiate between Unicode characters and bytes, and sometimes only support indexing on bytes.
Accessing a string one rune at a time is instead much simpler when iterating from the beginning of it, like for example using range. Consecutive bytes can be scanned and grouped together until they form a valid Unicode character that can be returned as a rune, moving on to the next one.
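To make the difference concrete, here is a small self-contained sketch (the sample string is arbitrary):

package main

import "fmt"

func main() {
    s := "héllo" // 'é' is encoded as two bytes in UTF-8

    for i, r := range s { // i is a byte index, r is a rune
        fmt.Printf("byte index %d: rune %q\n", i, r)
    }

    for i := 0; i < len(s); i++ { // s[i] is a single byte
        fmt.Printf("byte index %d: byte 0x%x\n", i, s[i])
    }
}

Note that the range loop skips byte index 2, because 'é' occupies byte indices 1 and 2.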

Just letting you know: if you want to iterate over a string with a classic for loop and use the [] operator to get each rune, you can do:
rstr := []rune(MyString)
for idx := 0; idx < len(rstr); idx++ {
    // code before...
    currentRune := rstr[idx]
    _ = currentRune // to avoid an "unused variable" error
    // code after...
}

Related

What happens when I range over an uninitialized pointer to array in golang

I have this code
var j *[33]byte
for i := range j {
    fmt.Println(j[i])
}
Now when I run this code I get a nil pointer dereference error when I try to access values in j. I'm not sure why I was even able to enter the loop in the first place, considering my pointer is uninitialized.
I know an uninitialized array has all its values set to their zero value. That is
var a [5]int
Will have a default value of [0, 0, 0, 0, 0].
But I don't understand what golang does when you don't initialize a pointer to an array. Why is range able to range over it even though it's nil?
From the Go spec Range Clause:
... For an array, pointer to array, or slice value a, the index
iteration values are produced in increasing order...
so as a convenience, the Go language dereferences the pointer with the intent of iterating over its elements. The fact that the pointer is nil is a simple programming error. If this can occur, one should have a runtime check in place to guard against it.
Static analysis may be able to detect this type of bug ahead of time - but what if the variable j is accessible from another goroutine - how would the compiler know for sure that another goroutine may update it to a non-nil value right before the range loop is reached?
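A minimal sketch of such a guard, reusing the question's variable. Note that the index-only form of range never dereferences j, since len(*[33]byte) is known from the type alone; only the element access j[i] touches the nil pointer:

package main

import "fmt"

func main() {
    var j *[33]byte

    // The index-only form never dereferences j: len(*[33]byte) is a
    // compile-time constant, so this loop runs even though j is nil.
    for i := range j {
        _ = i
    }

    // Guard the element access with an explicit runtime nil check.
    if j != nil {
        for i := range j {
            fmt.Println(j[i])
        }
    }
}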
Go defines a zero value for each type when you declare a variable with the var keyword (this differs when using :=, which is ideally used when you need copies of values or specific values). For pointers, the zero value is nil (likewise for maps, interfaces, channels, slices, and functions); for an array of type int, the zero value of each element is 0.
So, to answer your question, Go is able to iterate because the type guarantees 33 valid slots, independently of what value is at each position. You can check the difference between slices and arrays in the Go documentation for more insight into why that is.

Why does converting the same value in two ways produce different results?

Two kinds of conversion of the same constant to float64 return the same value, but when I try to convert these new values to int, the results are different.
...
const Big = 92233720368547758074444444

func needFloat(x float64) float64 {
    return x
}

func main() {
    fmt.Println(needFloat(Big))
    fmt.Println(float64(Big))
    fmt.Println(int(needFloat(Big)))
    fmt.Println(int(float64(Big)))
}
I'd expect the first two Println calls to return the same type of value:
fmt.Println(needFloat(Big)) // 9.223372036854776e+25
fmt.Println(float64(Big)) // 9.223372036854776e+25
so when I convert them to int, I expect the same output, but:
fmt.Println(int(needFloat(Big))) // -2147483648
fmt.Println(int(float64(Big))) // constant 92233720368547758080000000 overflows int
If your real question is why one attempt to convert to int produces a compile-time error message, but the other produces a very negative integer, it's because one is a compile-time conversion, and the other is a runtime conversion. I think it helps in these cases to be explicit about what you are expecting, and what can be run and what can't. Here's a Go Playground version of your code, where the last conversion is commented out. The reason for commenting it out is of course that it doesn't compile.
As Adrian noted in a comment, Big is a constant, specifically an untyped one. As Uvelichitel answered, a constant x (of any type) can be converted to a new and different type T if and only if
x is representable by a value of type T.
(The quote part is from the section Uvelichitel linked, except that mine adds the inner link for the word "representable".)
The expression float64(Big) is an explicit type conversion, with a constant as its x, so the result is a float64-typed constant with the given value. So far, that's fine: now we have 92233720368547758074444444 as a float64. This chops off some of the digits: the actual internal representation is 92233720368547758080000000 (see variant with %f directives). The low digits, ...74444444, have been rounded to ...80000000. See the link for "representable" for why the rounding occurs.
The expression int(float64(Big)) is an outer explicit type conversion surrounding an inner explicit type conversion. We already know what the inner type conversion does: it produces the float64 constant 92233720368547758080000000.0. The outer conversion tries to represent this new value as int, but it does not fit, producing an error:
./prog.go:18:17: constant 92233720368547758080000000 overflows int
if the commented-out line is uncommented. Note again that the value has been rounded, due to the inner conversion.
On the other hand, needFloat(Big) is a function call. Calling the function assigns the untyped constant to its argument (a float64) and obtains its return value (the same float64, value 92233720368547758080000000.0). Printing that prints what you'd expect, given the default or explicit formatting directive. The returned value is not a constant.
Similarly, int(needFloat(Big)) calls needFloat, which returns the same float64 value—not a constant—as before. The int explicit type conversion tries to convert this value to int at runtime, rather than at compile time. For such conversions between numeric types, there is a list of three explicit rules at https://golang.org/ref/spec#Conversions, plus a final caveat. Here, rule 2 applies: any fractional part is discarded. But the caveat also applies:
In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
In other words, there is no runtime error, but the int value you get—which in this case was -2147483648, which is the smallest allowed 32-bit integer—is up to the implementation. This particular implementation chose to use this particular negative number as its result. Another implementation might choose some other number. (Interestingly, in the playground, if I convert directly to uint I get zero. If I convert to int, then to uint, I get the 0x80000000 I expected.)
Hence, the key difference in terms of whether you get an error is whether you do the conversion at compile time, via constants, or at runtime, via runtime conversion.
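A minimal sketch of that difference, reusing the question's constant (the exact value printed by the runtime conversion is implementation-dependent):

package main

import "fmt"

const Big = 92233720368547758074444444

func main() {
    // Compile-time (constant) conversion: the result must be representable.
    // int(float64(Big)) // does not compile: constant 92233720368547758080000000 overflows int

    // Run-time (non-constant) conversion: it succeeds, but the result
    // for an out-of-range value is implementation-dependent.
    f := float64(Big)
    fmt.Println(int(f))
}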
int(float64(Big)) // illegal, because:
"A constant value x can be converted to type T if x is representable by a value of T."
int(needFloat(Big)) // is a non-constant expression because of the function call:
"A non-constant value x can be converted to type T in any of these cases: ... x's type and T are both integer or floating point types."
https://golang.org/ref/spec#Conversions

How to Define a Constant Value of a User-defined Type in Go?

I am implementing a bit-vector in Go:
// A bit vector uses a slice of unsigned integer values or “words,”
// each bit of which represents an element of the set.
// The set contains i if the ith bit is set.
// The following program demonstrates a simple bit vector type with these methods.
type IntSet struct {
    words []uint64 // uint64 is important because we need control over the number and value of bits
}
I have defined several methods (e.g. membership test, adding or removing elements, set operations like union, intersection etc.) on it which all have a pointer receiver. Here is one such method:
// Has returns true if the given integer is in the set, false otherwise.
func (this *IntSet) Has(m int) bool {
    // details omitted for brevity
}
Now, I need to return an empty set that is a true constant, so that I can use the same constant every time I need to refer to an IntSet that contains no elements. One way is to return something like &IntSet{}, but I see two disadvantages:
Every time an empty set is to be returned, a new value needs to be allocated.
The returned value is not really constant since it can be modified by the callers.
How do you define a null set that does not have these limitations?
If you read https://golang.org/ref/spec#Constants you see that constants are limited to basic types. A struct, slice, or array will not work as a constant.
I think that the best you can do is to make a function that returns a copy of an internal empty set. If callers modify it, that isn't something you can fix.
Actually, modifying it would be difficult for them, since the words field inside the IntSet is lowercase and therefore unexported. If you added a field next to words, say mut bool, you could add an if mut check to every method that changes the IntSet; if it isn't mutable, return an error or panic.
With that, you could keep users from modifying constant, non-mutable IntSet values.
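A minimal sketch of both suggestions; the mut field, the GuardedIntSet name, and the Add body are hypothetical illustrations, not the asker's code:

// EmptySet returns a fresh empty set on each call, so callers cannot
// affect one another by mutating the result.
func EmptySet() *IntSet {
    return &IntSet{}
}

// A hypothetical guarded variant: every mutating method checks the flag.
type GuardedIntSet struct {
    words []uint64
    mut   bool // false means "treat this set as constant"
}

func (s *GuardedIntSet) Add(x int) {
    if !s.mut {
        panic("attempt to modify an immutable IntSet")
    }
    word, bit := x/64, uint(x%64)
    for word >= len(s.words) {
        s.words = append(s.words, 0)
    }
    s.words[word] |= 1 << bit
}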

Simple way of getting key depending on value from hashmap in Golang

Given a hashmap in Golang which has a key and a value, what is the simplest way of retrieving the key given the value?
For example Ruby equivalent would be
key = hashMap.key(value)
There is no built-in function to do this; you will have to make your own. Below is an example function that will work for map[string]int, which you can adapt for other map types:
func mapkey(m map[string]int, value int) (key string, ok bool) {
    for k, v := range m {
        if v == value {
            key = k
            ok = true
            return
        }
    }
    return
}
Usage:
key, ok := mapkey(hashMap, value)
if !ok {
    panic("value does not exist in map")
}
The important question is: How many times will you have to look up the value?
If you only need to do it once, then you can iterate over the key, value pairs and keep the key (or keys) that match the value.
If you have to do the lookup often, then I would suggest you make another map with the keys and values reversed (assuming all keys map to unique values), and use that for lookups, as sketched below.
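A minimal sketch of that reversed-map approach, under the assumption that values are unique (reverseMap is a made-up helper name):

// reverseMap builds a value-to-key index once, for repeated lookups.
func reverseMap(m map[string]int) map[int]string {
    r := make(map[int]string, len(m))
    for k, v := range m {
        r[v] = k
    }
    return r
}

Usage:

byValue := reverseMap(hashMap)
key, ok := byValue[value]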
I am in the midst of working on a server based on bitcoin, where there is a list of constants and byte codes for the payment scripts. The C++ version has identifiers for the codes plus another function that returns the string version, so it's really not much extra work to take the original map, with opcode strings as keys and bytes as values, and reverse it. The only thing that niggles me is duplicate values: the true and false opcodes overlap zero and one, so the first element of each string slice holds the number/opcode names, and the truth values sit at the second index.
Iterating the list every time to identify the script command to execute would test, on average, 50% of the map's elements; it's much simpler to keep a reverse lookup table. The scripts may have to be executed as many as 10,000 times per full block, so it makes no sense to save memory and pay in processing instead.

How to find all brotherhood strings?

I have a string, and another text file which contains a list of strings.
We call 2 strings "brotherhood strings" when they're exactly the same after sorting alphabetically.
For example, "abc" and "cba" will be sorted into "abc" and "abc", so the original two are brotherhood. But "abc" and "aaa" are not.
So, is there an efficient way to pick out all brotherhood strings from the text file, according to the one string provided?
For example, we have "abc" and a text file which writes like this:
abc
cba
acb
lalala
then "abc", "cba", "acb" are the answers.
Of course, "sort & compare" is a nice try, but by "efficient" I mean: is there a way to determine whether a candidate string is a brotherhood of the original after one pass of processing?
This is the most efficient way, I think. After all, you cannot tell the answer without even reading the candidate strings, and sorting usually needs more than one pass over a candidate string. So a hash table might be a good solution, but I've no idea what hash function to choose.
Most efficient algorithm I can think of:
- Set up a hash table for the original string. Let each letter be a key, and the number of times the letter appears in the string be the value. Call this hash table inputStringTable.
- Parse the input string, and each time you see a character, increment the value of its hash entry by one.
- For each string in the file:
  - Create a new hash table. Call this one brotherStringTable.
  - For each character in the string, add one to brotherStringTable. If brotherStringTable[character] > inputStringTable[character], this string is not a brother (one character shows up too many times).
  - Once the string is parsed, compare each inputStringTable value with the corresponding brotherStringTable value. If one differs, the string is not a brother string. If all match, it is.
This will be O(nk), where n is the length of the input string (any strings longer than the input string can be discarded immediately) and k is the number of strings in the file. Any sort-based algorithm will be O(nk lg n), so in certain cases this algorithm is faster than a sort-based one.
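The answers in this thread use several languages; for consistency with the Go questions above, here is a minimal Go sketch of the counting approach just described (the function name is made up). It uses a fixed byte-count array instead of a hash table, which works for 8-bit characters:

// isBrother reports whether candidate is a brotherhood string of
// original, in a single pass over candidate, using byte counts.
func isBrother(original, candidate string) bool {
    if len(original) != len(candidate) {
        return false
    }
    var counts [256]int
    for i := 0; i < len(original); i++ {
        counts[original[i]]++
    }
    for i := 0; i < len(candidate); i++ {
        counts[candidate[i]]--
        if counts[candidate[i]] < 0 { // this byte appears too often
            return false
        }
    }
    // Equal lengths and no count went negative, so all counts are zero.
    return true
}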
Sorting each string, then comparing it, works out to something like O(N*(k+log S)), where N is the number of strings, k is the search key length, and S is the average string length.
It seems like counting the occurrences of each character might be a possible way to go here (assuming the strings are of a reasonable length). That gives you O(k+N*S). Whether that's actually faster than the sort & compare is obviously going to depend on the values of k, N, and S.
I think that in practice, the cache-thrashing effect of re-writing all the strings in the sorting case will kill performance, compared to any algorithm that doesn't modify the strings...
Iterate, sort, compare. That shouldn't be too hard, right?
Let's assume your alphabet is from 'a' to 'z' and you can index an array based on the characters. Then, for each element in a 26 element array, you store the number of times that letter appears in the input string.
Then you go through the set of strings you're searching, and iterate through the characters in each string. You can decrement the count associated with each letter in (a copy of) the array of counts from the key string.
If you finish your loop through the candidate string without having to stop, and you have seen the same number of characters as there were in the input string, it's a match.
This allows you to skip the sorts in favor of a constant-time array copy and a single iteration through each string.
EDIT: Upon further reflection, this is effectively sorting the characters of the first string using a bucket sort.
I think what will help you is a test of whether two strings are anagrams. Here is how you can do it. I am assuming the strings use 8-bit characters, so there are 256 possible values.
#include <stdbool.h>
#include <string.h>

#define NUM_ALPHABETS 256
int alphabets[NUM_ALPHABETS];

bool isAnagram(char *src, char *dest) {
    int len1 = strlen(src);
    int len2 = strlen(dest);
    if (len1 != len2)
        return false;
    memset(alphabets, 0, sizeof(alphabets));
    for (int i = 0; i < len1; i++)
        alphabets[(unsigned char)src[i]]++;
    for (int i = 0; i < len2; i++) {
        alphabets[(unsigned char)dest[i]]--;
        if (alphabets[(unsigned char)dest[i]] < 0)
            return false;
    }
    return true;
}
This will run in O(mn) if you have 'm' strings in the file of average length 'n'
- Sort your query string.
- Iterate through the collection, doing the following:
  - Sort the current string.
  - Compare it against the query string.
  - If it matches, this is a "brotherhood" match; save it, index it, whatever you want.
That's pretty much it. If you're doing lots of searching, presorting all of your collection will make the routine a lot faster (at the cost of extra memory). If you are doing this even more, you could pre-sort and save a dictionary (or some hashed collection) based off the first character, etc, to find matches much faster.
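A minimal Go sketch of this sort-and-compare routine (sortChars and brothers are made-up names, not library functions):

import "sort"

// sortChars returns s with its bytes sorted, e.g. "cba" becomes "abc".
func sortChars(s string) string {
    b := []byte(s)
    sort.Slice(b, func(i, j int) bool { return b[i] < b[j] })
    return string(b)
}

// brothers returns every word whose sorted form matches the query's.
func brothers(query string, words []string) []string {
    target := sortChars(query)
    var out []string
    for _, w := range words {
        if len(w) == len(query) && sortChars(w) == target {
            out = append(out, w)
        }
    }
    return out
}

The length check before the second sortChars call implements the cheap pre-filter suggested above.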
It's fairly obvious that each brotherhood string will have the same histogram of letters as the original. It is trivial to construct such a histogram, and fairly efficient to test whether the input string has the same histogram as the test string (you have to increment or decrement counters for twice the length of the input string).
The steps would be:
- Construct a histogram of the test string (zero an array int histogram[128] and increment the position for each character in the test string).
- For each input string:
  - For each character c in the input string, test whether histogram[c] is zero. If it is, this is a non-match; restore the histogram.
  - Otherwise, decrement histogram[c].
  - To restore the histogram, traverse the input string back to its start, incrementing rather than decrementing.
At most, it requires two increments/decrements of an array for each character in the input.
The most efficient answer will depend on the contents of the file. Any algorithm we come up with will have complexity proportional to N (number of words in file) and L (average length of the strings) and possibly V (variety in the length of strings)
If this were a real world situation, I would start with KISS and not try to overcomplicate it. Checking the length of the target string is simple but could help avoid lots of nlogn sort operations.
target = sort_characters("target string")
count = 0
foreach (word in inputfile) {
    if target.len == word.len && target == sort_characters(word) {
        count++
    }
}
I would recommend, for each string in the text file:
- compare its size with the "source string" (sizes of brotherhood strings should be equal)
- compare hashes (CRC or the default framework hash should be good)
- in case of equality, do a finer comparison with the strings sorted.
It's not the fastest algorithm but it will work for any alphabet/encoding.
Here's another method, which works if you have a relatively small set of possible "letters" in the strings, or good support for large integers. It basically consists of writing a position-independent hash function...
Assign a different prime number to each letter:
prime['a'] = 2;
prime['b'] = 3;
prime['c'] = 5;
Write a function that runs through a string, repeatedly multiplying the prime associated with each letter into a running product
long long key(char *s)
{
    long long product = 1;
    while (*s) {
        product *= prime[(unsigned char)*s]; // multiply in this letter's prime
        s++;
    }
    return product;
}
This function returns a guaranteed-unique integer for any multiset of letters, independent of the order in which they appear in the string (as long as the product does not overflow). Once you've got the value for the "key", you can go through the list of strings to match, and perform the same operation.
Time complexity of this is O(N), of course. You can even re-generate the (sorted) search string by factoring the key. The disadvantage, of course, is that the keys do get large pretty quickly if you have a large alphabet.
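A minimal Go sketch of the prime-product key, restricted to the letters 'a'..'z' to keep the product small (the primes table and function names are made up; overflow remains a concern for long strings, as noted above):

// primes assigns a distinct prime to each of 'a'..'z'.
var primes = [26]uint64{
    2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
    43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101,
}

// key multiplies together the prime for each letter, so anagrams
// share a key; the product can overflow for long strings, so it is
// safest to compare lengths first and keep strings short.
func key(s string) uint64 {
    product := uint64(1)
    for i := 0; i < len(s); i++ {
        product *= primes[s[i]-'a']
    }
    return product
}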
Here's an implementation. It creates a dict of the letters of the master string, and a string version of the same, since string comparisons will be done at C speed. When creating a dict of the letters in a trial string, it checks against the master dict in order to fail at the first possible moment: if it finds a letter not in the original, or more of some letter than the original has, it fails. You could replace the strings with integer-based hashes (as per one answer regarding base 26) if that proves quicker. Currently the hash used for comparison looks like a3b1c2 for abacca (counts serialized in sorted key order).
This should work out O(N log( min(M,K) )) for N strings of length M and a reference string of length K, and requires the minimum number of lookups of the trial string.
master = "abc"
wordset = "def cba accb aepojpaohge abd bac ajghe aegage abc".split()
def dictmaster(str):
charmap = {}
for char in str:
if char not in charmap:
charmap[char]=1
else:
charmap[char] += 1
return charmap
def dicttrial(str,mastermap):
trialmap = {}
for char in str:
if char in mastermap:
# check if this means there are more incidences
# than in the master
if char not in trialmap:
trialmap[char]=1
else:
trialmap[char] += 1
else:
return False
return trialmap
def dicttostring(hash):
if hash==False:
return False
str = ""
for char in hash:
str += char + `hash[char]`
return str
def testtrial(str,master,mastermap,masterhashstring):
if len(master) != len(str):
return False
trialhashstring=dicttostring(dicttrial(str,mastermap))
if (trialhashstring==False) or (trialhashstring != masterhashstring):
return False
else:
return True
mastermap = dictmaster(master)
masterhashstring = dicttostring(mastermap)
for word in wordset:
if testtrial(word,master,mastermap,masterhashstring):
print word+"\n"
