Avoid too much conversion - Go

I have some parts in my current Go code that look like this:
i := int(math.Floor(float64(len(l)/4)))
The verbosity seems necessary because of some function type signatures like the one in math.Floor, but can it be simplified?

In general, the strict typing of Go leads to some verbose expressions. Verbose doesn't mean stuttering though. Type conversions do useful things and it's valuable to have those useful things explicitly stated.
The trick to simplification is to not write unneeded type conversions, and for that you need to refer to documentation such as the language definition.
In your specific case, you need to know that len() returns an int and, further, a value >= 0. You need to know that 4 is a constant that takes on the type int in this expression, and that integer division returns the integer quotient, which here is a non-negative int and in fact exactly the answer you want.
i := len(l)/4
This case is an easy one.
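For instance, here is a self-contained check that the verbose and simplified forms always agree (the slice type and length are arbitrary; math.Floor is a no-op here because the integer division has already truncated):

package main

import (
    "fmt"
    "math"
)

func main() {
    l := make([]byte, 10)
    // The verbose original: len(l)/4 truncates before Floor ever runs.
    verbose := int(math.Floor(float64(len(l) / 4)))
    // The simplified form: len returns int, 4 is an int constant, and
    // integer division yields exactly the truncated quotient.
    simple := len(l) / 4
    fmt.Println(verbose, simple) // 2 2
}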

Go's integer division truncates toward zero, and len(l) always returns an int, so
i := len(l) / 4
If the length were some other integer type, i := int(len(l))/4 or i := int(len(l)/4) would also work, with the first being theoretically slightly faster than the second.
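For reference, a quick demonstration of that truncation (it truncates toward zero, which only matters for negative operands; len can never produce one):

package main

import "fmt"

func main() {
    fmt.Println(7 / 4)  // 1
    fmt.Println(-7 / 4) // -1, not -2: truncation toward zero, not flooring
}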

Related

Ada random Integer in range of array length

It's a simple question, yet I can't find anything that could help me...
I want to create some random connections between graph nodes. To do this I want to draw two random indexes and then connect the nodes.
declare
    type randRange is range 0..100000;
    n1 : randRange;
    n2 : randRange;
    package Rand_Int is new ada.numerics.discrete_random(randRange);
    use Rand_Int;
    gen : Generator;
begin
    n1 := random(gen) mod n; -- first node
    n2 := random(gen) mod n;
I wanted to define the range using the length of my array, but I got errors and it still doesn't compile.
Also, I can't apply mod, as n is a Natural.
75:15: "Generator" is not visible
75:15: multiple use clauses cause hiding
75:15: hidden declaration at a-nudira.ads:50, instance at line 73
75:15: hidden declaration at a-nuflra.ads:47
And I have no idea what these errors mean - obviously, something is wrong with my generator.
I would appreciate it if someone showed me the proper way to do this simple thing.
As others have answered, the invisibility of Generator is due to you having several "use" clauses for packages all of which have a Generator. So you must specify "Rand_Int.Generator" to show that you want the Generator from the Rand_Int package.
The problem with the "non-static expression" happens because you try to define a new type randRange, and that means the compiler has to decide how many bits it needs to use for each value of the type, and for that the type must have compile-time, i.e. static, bounds. You can instead define it as a subtype:
subtype randRange is Natural range 0 .. n-1;
and then the compiler knows that it can use the same number of bits as it uses for the Natural type. (I assume here that "n" is an Integer, or Natural or Positive; otherwise, use whatever type "n" is.)
Using a subtype should also resolve the problem with the "expected type".
You don't show us the whole code necessary to reproduce the errors, but the error messages suggest you have another use clause somewhere, a use Ada.Numerics.Float_Random;. Either remove that, or specify which generator you want, i.e. gen : Rand_Int.Generator;.
As for mod, you should specify the exact range you want when instantiating Discrete_Random instead:
subtype randRange is Natural range 0 .. n-1; -- but why start at 0? A list of nodes is better described with 1 .. n
package Rand_Int is new Ada.Numerics.Discrete_Random(randRange);
Now there's no need for mod.
The error messages you mention have to do with the concept of visibility in Ada, which differs from most other languages. Understanding visibility is key to understanding Ada. I recommend that beginners avoid use <package> clauses in order to avoid the visibility issues they involve. As you gain experience with the language you can experiment with use for common packages such as Ada.Text_IO.
As you seem to come from a language in which arrays have to have integer indices starting from zero, I recommend Ada Distilled, which does an excellent job of describing visibility in Ada. It covers ISO/IEC 8652:2007, but you should have no difficulty picking up Ada 2012 from that basis.
If you're interested in the issues involved in obtaining a random integer value in a subrange of an RNG's result range, or from a floating-point random value, you can look at PragmARC.Randomness.Real_Ranges and PragmARC.Randomness.U32_Ranges in the PragmAda Reusable Components.

In Go, when should you use uint vs int?

At first glance, it seems like one might opt for uint when you need an int that you don't want to be negative. However, in practice it seems that int is nearly always preferred.
I see general recommendations like:
"Generally if you are working with integers you should just use the int type."
"uint should generally only be used for doing binary operations"
"Don't use unsigned types to enforce or suggest that a number must be positive. That's not what they're for."
"this is what The Go Programming Language recommends, with the specific example of uints being useful when you want to do bitwise operations"
I also noticed that Go will let you convert a negative int to uint and give some odd results:
x := -5
y := uint(x)
fmt.Println(y)
>> 18446744073709551611
So, my understanding is that I should always use int when dealing with whole numbers, regardless of sign, unless I find myself needing uint, and I'll know it when that's the case (I think???).
My questions:
Is this the right takeaway?
If so, why is this the case?
What's an example of when one should use uint? -- maybe a specific example, as opposed to "when doing binary operations", as I'm not sure I know what that means :)
Also, I'm asking specific to Go's implementation.
This answer is for C but it's relevant here.
Generally if you are working with integers you should just use the int type.
This is recommended because most of the code that we "generally" encounter deals with the int type, and it's usually not even necessary to choose between an int and a uint.
Don't use unsigned types to enforce or suggest that a number must be positive. That's not what they're for.
This is quite subjective. You can very well use unsigned types to keep your program and data type-safe, and spare yourself from dealing with the occasional errors that arise from a negative integer.
"this is what The Go Programming Language recommends, with the specific example of uints being useful when you want to do bitwise operations"
This looks vague. Please add the source for this, I would like to read up on it.
x := -5
y := uint(x)
fmt.Println(y)
>> 18446744073709551611
This is typical of a number of languages. The logic behind it is that when you convert an int to a uint, the binary representation of the int is simply reinterpreted as a uint. In the end, everything is just an abstraction over binary.
For example, take a look at this code and its output:
package main

import (
    "fmt"
    "unsafe"
)

func main() {
    a := int64(-123)
    // Reinterpret a's bytes in memory; on a little-endian machine they
    // read left to right in increasing order of significance.
    byteSliceRev := *(*[8]byte)(unsafe.Pointer(&a))
    u := uint(a)
    byteSliceRevU := *(*[8]byte)(unsafe.Pointer(&u))
    byteSlice, byteSliceU := make([]byte, 8), make([]byte, 8)
    for i := 0; i < 8; i++ {
        // Reverse so the most significant byte prints first.
        byteSlice[i], byteSliceU[i] = byteSliceRev[7-i], byteSliceRevU[7-i]
    }
    fmt.Println(u)
    // 18446744073709551493
    fmt.Printf("%b\n", byteSlice)
    // [11111111 11111111 11111111 11111111 11111111 11111111 11111111 10000101]
    fmt.Printf("%b\n", byteSliceU)
    // [11111111 11111111 11111111 11111111 11111111 11111111 11111111 10000101]
}
The byte representation of the int64 value -123 is identical to that of the uint value 18446744073709551493.
So, my understanding is that I should always use int when dealing with whole numbers, regardless of sign, unless I find myself needing uint, and I'll know it when that's the case (I think???).
But isn't this more or less true of all the code that "we" write?
Is this the right takeaway?
If so, why is this the case?
I hope I have answered these two questions. Feel free to ask me if you still have any doubts.
What's an example of when one should use uint? -- maybe a specific example, as opposed to "when doing binary operations", as I'm not sure I know what that means :)
Imagine a scenario in which you have a table in your database with a lot of entries, each with an integer id that is always positive. If you store these as int, one bit of every entry is effectively unused, and when you scale this up you are losing a lot of space that a uint would have saved. A similar scenario arises when transmitting data, tons of integers to be precise. Also, uint has double the range for positive integers compared to its signed counterpart thanks to the extra bit, so it will take you longer to run out of numbers. Storage is cheap now, so people generally ignore this supposedly minor gain.
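A minimal sketch of the range point (values are for a 64-bit platform):

package main

import (
    "fmt"
    "math"
)

func main() {
    // uint offers double the positive range of int thanks to the sign bit.
    fmt.Println(int64(math.MaxInt64))   // 9223372036854775807
    fmt.Println(uint64(math.MaxUint64)) // 18446744073709551615
}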
The other use case is type safety. A uint can never be negative, so if a part of your code is sensitive to negative numbers, it can prove pretty handy. It's better to get the error before wasting resources on the data, only to find out it's impermissible because it's negative.
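A minimal sketch of that type-safety argument; fetchRecord is a hypothetical function, and the offending call is shown commented out because it does not compile:

package main

import "fmt"

// fetchRecord's uint parameter documents and enforces that an id
// can never be negative.
func fetchRecord(id uint) {
    fmt.Println("fetching record", id)
}

func main() {
    fetchRecord(42) // fine
    // fetchRecord(-1) // compile error: constant -1 overflows uint
}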
The image package uses uint, and so does crypto/tls, so when you use those packages you have to use uint.
I go with whatever seems logical at first, but I don't fight over it; if it becomes an issue I take the practical approach. Consider, for example, why len() returns an int.
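On that last point, the classic illustration of why len() returning a signed int is convenient:

package main

import "fmt"

func main() {
    s := []int{1, 2, 3}
    // Because len returns a signed int, this backwards loop terminates.
    // If i were unsigned, i >= 0 would always be true and i-- would wrap
    // around at zero, so the loop would run forever (and panic on s[i]).
    for i := len(s) - 1; i >= 0; i-- {
        fmt.Println(s[i])
    }
}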

In Golang, uint16 VS int, which is less cost?

I am using a 64-bit server, and my Go program needs integer types.
So, if I use the uint16 and uint32 types in the source code, does it cost more than using the regular int type?
I am considering both computing cost and development cost.
For the vast majority of cases using int makes more sense.
Here are some reasons:
Go doesn't implicitly convert between the numeric types, even when you think it should. If you start using some unsigned type instead of int, you should expect to pepper your code with multiple type conversions, because of other libraries or APIs preferring not to bother with unsigned types, because of untyped constant numerical expressions returning int values, etc.
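For instance, a small sketch of how the conversions creep in as soon as one value is unsigned:

package main

import "fmt"

func main() {
    var n uint = 10
    total := 0
    // i defaults to int, so comparing it against n needs an explicit
    // conversion; mixed signed/unsigned expressions never convert implicitly.
    for i := 0; i < int(n); i++ {
        total += i
    }
    fmt.Println(total) // 45
}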
Unsigned types are more prone to underflowing than signed types, because 0 (an unsigned type's boundary value) is much more of a naturally occurring value in computer programs than, for example, -9223372036854775808.
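A minimal sketch of such an underflow, using a hypothetical inventory-style counter (output is for a 64-bit platform):

package main

import "fmt"

func main() {
    var remaining uint = 3
    remaining -= 5 // crosses the 0 boundary and silently wraps around
    fmt.Println(remaining) // 18446744073709551614
}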
If you want to use an unsigned type because it restricts the values that you can put in it, keep in mind that when you combine silent underflow and compile time-only constant propagation, you probably aren't getting the bargain you were looking for. For example, while you cannot convert the constant math.MinInt64 to a uint, you can easily convert an int variable with value math.MinInt64 to a uint. And arguably it's not a bad Go style to have an if check whether the value you're trying to assign is valid for your program.
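And a minimal sketch of that last point (output assumes a 64-bit platform):

package main

import (
    "fmt"
    "math"
)

func main() {
    // The constant conversion is rejected at compile time:
    // const c = math.MinInt64
    // _ = uint(c) // compile error: -9223372036854775808 overflows uint

    // The same conversion through a variable compiles and silently wraps:
    v := int64(math.MinInt64)
    fmt.Println(uint(v)) // 9223372036854775808
}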
Unless you are experiencing significant memory pressure and your value space is somewhere slightly over what a smaller signed type would offer you, I'd think that using int will be much more efficient even if only because of development cost.
And even then, chances are that either there's a problem somewhere else in your program's memory footprint, or a managed language like Go is not the best fit for your needs.

What is the difference between std::atoi() and std::stoi?

What is the difference between atoi and stoi?
I know that, given
std::string my_string = "123456789";
in order to convert that string to an integer, you’d have to do the following:
const char* my_c_string = my_string.c_str();
int my_integer = atoi(my_c_string);
C++11 offers a succinct replacement:
std::string my_string = "123456789";
int my_integer = std::stoi(my_string);
1). Are there any other differences between the two?
2). Efficiency and performance wise which one is better?
3). Which is safer to use?
1). Are there any other differences between the two?
I find std::atoi() a horrible function: It returns zero on error. If you consider zero as a valid input, then you cannot tell whether there was an error during the conversion or the input was zero. That's just bad. See for example How do I tell if the c function atoi failed or if it was a string of zeros?
On the other hand, the corresponding C++ function will throw an exception on error. You can properly distinguish errors from zero as input.
2). Efficiency and performance wise which one is better?
If you don't care about correctness or you know for sure that you won't have zero as input or you consider that an error anyway, then, perhaps the C functions might be faster (probably due to the lack of exception handling). It depends on your compiler, your standard library implementation, your hardware, your input, etc. The best way is to measure it. However, I suspect that the difference, if any, is negligible.
If you need a fast (but ugly C-style) implementation, the most upvoted answer to the How to parse a string to an int in C++? question seems reasonable. However, I would not go with that implementation unless absolutely necessary (mainly because of having to mess with char* and \0 termination).
3). Which is safer to use?
See the first point.
In addition to that, if you need to work with char* and watch out for \0 termination, you are more likely to make mistakes. std::string is much easier and safer to work with, because it takes care of all this stuff for you.

Difference between BOOST_CHECK_CLOSE and BOOST_CHECK_CLOSE_FRACTION?

Can anyone describe the difference in behavior between BOOST_CHECK_CLOSE and BOOST_CHECK_CLOSE_FRACTION? The documentation implies that both macros treat their third parameter identically, which makes me suspect the documentation is wrong.
In particular, BOOST_CHECK_CLOSE_FRACTION gives me some odd looking results:
error in "...": difference between *expected{0} and *actual{-1.7763568394002506e-16} exceeds 9.9999999999999995e-07
Is there a gotcha because I expect a zero result? I've not been successful at reading through the underlying macro declarations. Please note BOOST_CHECK_SMALL isn't appropriate for my use case (comparing two vectors after a linear algebra operation).
According to this discussion, one (BOOST_CHECK_CLOSE) treats the third parameter as expressing a percent, while the other (BOOST_CHECK_CLOSE_FRACTION) treats it as expressing a fraction. So, .01 in the first should be equivalent to .0001 in the second.
Not certain if that explains your problem -- do you get the same odd result with BOOST_CHECK_CLOSE? I wouldn't be shocked if the 0 caused an issue -- but I don't have first hand experience with the macros.
Yes. Zero is not "close" to any value. You can use BOOST_CHECK_SMALL instead.
@Gennadiy: Zero can be close to any small value. :-) Relative differences grow arbitrarily large if the expected value is very close to zero.
Here is a workaround function I use to unit-test double values: if the expected value is very small or zero then I check the smallness of the observed value, otherwise I check closeness:
#include <cmath> // for std::fabs; the Boost.Test macros are assumed to be in scope

void dbl_check_close(
    double expected, double observed,
    double small, double pct_tol
) {
    if (std::fabs(expected) < small) {
        BOOST_CHECK_SMALL(observed, small);
    } else {
        BOOST_CHECK_CLOSE(expected, observed, pct_tol);
    }
}
Of course it would be great to have a BOOST_CHECK_SMALL_OR_CLOSE macro that does this automatically. Gennadiy could perhaps talk to the author of Boost.Test ;-)
