Should I always avoid lower case class names? - coding-style

The following is incompatible with the Dart Style Guide. Should I not do this? I'm extending the num class to build a unit-of-measure library for medical applications. Using lower case looks more elegant than Kg or Lbs, and in some cases lower case is recommended for safety, e.g. mL instead of Ml.
class kg extends num {
  String uom = "kg";
  num _value;

  kg(this._value);

  lbs toLbs() => new lbs(2.20462 * _value);
}

class lbs extends num {
  String uom = "lbs";
  num _value;

  lbs(this._value);

  kg toKg() => new kg(_value / 2.20462);
}

For your case I might pick a unit (e.g. milligrams) and make other units multiples of it. You can use division for conversion:
const mg = 1; // The unit
const g = mg * 1000;
const kg = g * 1000;
const lb = mg * 453592;

main() {
  const numPoundsPerKilogram = kg / lb;
  print(numPoundsPerKilogram); // 2.20462...
  const twoPounds = lb * 2;
  const numGramsInTwoPounds = twoPounds / g;
  print(numGramsInTwoPounds); // 907.184
}
It's best to make the base unit small, so that the other units can be integer multiples of it (ints are arbitrary precision).

It's up to you whether you follow the guidelines.
There might be situations where it matters.
One I can think of is an open source project where you don't want to alienate potential contributors.
If you have no special reason to deviate, stick with the guidelines; if you have a good reason, deviate from them.

Don't use classes for units. Use them for quantities:
class Weight {
  static const double LBS_PER_KG = 2.20462;
  num _kg;

  Weight.fromKg(this._kg);
  Weight.fromLbs(num lbs) : _kg = lbs / LBS_PER_KG;

  get inKg => _kg;
  get inLbs => LBS_PER_KG * _kg;
}
Take a look at the Duration class for ideas.
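For comparison, here is a minimal Kotlin sketch of the same quantity-over-unit idea (my own illustration, not from the original answer; the Duration-style factory names are assumptions):

// A quantity type in the style of Duration: one canonical internal unit (kg),
// with named factory functions and accessors for other units.
class Weight private constructor(private val kg: Double) {
    companion object {
        const val LBS_PER_KG = 2.20462
        fun fromKg(kg: Double) = Weight(kg)
        fun fromLbs(lbs: Double) = Weight(lbs / LBS_PER_KG)
    }
    val inKg: Double get() = kg
    val inLbs: Double get() = kg * LBS_PER_KG
}

fun main() {
    val w = Weight.fromLbs(2.20462)
    println(w.inKg)  // 1.0
    println(w.inLbs) // 2.20462
}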

Coding conventions serve to help you write quality, readable code. If you find that ignoring a certain convention helps to improve the readability, that is your decision.
However, code is rarely only seen by one pair of eyes. Other programmers will be confused when reading your code if it doesn't follow the style guide. Following coding conventions allows others to quickly dive into your code and easily understand what is going on. Of course, it is possible that you really will be the only one to ever view this code, in which case this is moot.
I would avoid deviating from the style guide in most cases, except where the advantage is very obvious. For your situation, I don't think the advantage outweighs the disadvantage.

I wouldn't really take those coding standards too seriously.
I think these guidelines are much more reasonable: less arbitrary, and more useful:
Java: Code Conventions
Javascript: http://javascript.crockford.com/code.html
There are many more you can choose from - pick whatever works best for you!
PS:
Since you're talking about Dart, check out this article:
Google Dart to ultimately replace Javascript ... not!

Related

Removing mutability without losing speed

I have a function like this:
import kotlin.math.absoluteValue
import kotlin.random.Random

fun randomWalk(numSteps: Int): Int {
    var n = 0
    repeat(numSteps) { n += (-1 + 2 * Random.nextInt(2)) }
    return n.absoluteValue
}
This works fine, except that it uses a mutable variable, and I would like to make everything immutable when possible, for better safety and readability. So I came up with an equivalent version that doesn't use any mutable variables:
fun randomWalk_seq(numSteps: Int): Int =
    generateSequence(0) { it + (-1 + 2 * Random.nextInt(2)) }
        .elementAt(numSteps)
        .absoluteValue
This also works fine and produces the same results, but it takes 3 times longer.
I used the following way to measure it:
import kotlin.time.ExperimentalTime
import kotlin.time.measureTime

@OptIn(ExperimentalTime::class)
fun main() {
    val numSamples = 100000
    val numSteps = 15708
    repeat(5) {
        val randomWalkSamples: IntArray
        val duration = measureTime {
            randomWalkSamples = IntArray(numSamples) { randomWalk(numSteps) }
        }
        println(duration)
    }
}
I know it's a bit hacky (I could have used JMH but this is just a quick test - at least I know that measureTime uses a monotonic clock). The results for the iterative (mutable) version:
2.965358406s
2.560777033s
2.554363661s
2.564279403s
2.608323586s
As expected, the first line shows it took a bit longer on the first run due to the warming up of the JIT, but the next 4 lines have fairly small variation.
After replacing randomWalk with randomWalk_seq:
6.636866719s
6.980840906s
6.993998111s
6.994038706s
7.018054467s
Somewhat surprisingly, I don't see any warm-up time: the first line always shows a shorter duration than the following four, every time I run this. Also, the duration keeps increasing from run to run, with line 5 always being the longest.
Can someone explain the findings, and also is there any way of making this function not use any mutable variables but still have performance that is close to the mutable version?
Your solution is slower for two main reasons: boxing and the complexity of the iterator used by generateSequence()'s Sequence implementation.
Boxing happens because a Sequence uses its types generically, so it cannot use primitive 32-bit Ints directly, but must wrap them in classes and unwrap them when retrieving the items.
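To make the boxing point concrete, here is a small sketch of my own (not from the answer) contrasting the boxed and primitive paths:

fun main() {
    val n = 10_000
    // Sequence<Int> path: elements travel through a generic Iterator<T>, so each
    // Int is wrapped in a java.lang.Integer object and unwrapped again.
    val boxedSum = generateSequence(1) { it + 1 }.take(n).sum()
    // IntArray path: primitive ints end to end, no wrapper allocations.
    val primitiveSum = IntArray(n) { it + 1 }.sum()
    println(boxedSum == primitiveSum) // true: same result, different allocation cost
}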
You can see the complexity of the iterator by Ctrl+clicking the generateSequence function to view the source code.
@Михаил Нафталь's suggestion is faster because it avoids the complex iterator of the sequence, but it still has boxing.
I tried writing an overload of sumOf that uses IntProgression directly instead of Iterable<T>, so it won't use boxing, and that resulted in performance equivalent to your imperative code with the var. As you can see, it's marked inline, and when combined with the { -1 + 2 * Random.nextInt(2) } lambda suggested by @Михаил Нафталь, the resulting compiled code will be equivalent to your imperative code.
inline fun IntProgression.sumOf(selector: (Int) -> Int): Int {
    var sum = 0
    for (element in this) {
        sum += selector(element)
    }
    return sum
}
Ultimately, I don't think you're buying yourself much in the way of code clarity by removing a single var in such a small function. I would say the sequence code is arguably harder to read. vars may add to code complexity in complex algorithms, but I don't think they do in such simple algorithms, especially when there's only one of them and it's local to the function.
An equivalent immutable one-liner is:
fun randomWalk2(numSteps: Int) =
    (1..numSteps).sumOf { -1 + 2 * Random.nextInt(2) }.absoluteValue
Probably, even more performant would be to replace
{ -1 + 2 * Random.nextInt(2) }
with
{ Random.nextInt(2) }
and hoist the affine transformation out of the sum, so that you'll have one multiplication and n additions instead of n multiplications and (2*n-1) additions:
fun randomWalk3(numSteps: Int) =
    (-numSteps + 2 * (1..numSteps).sumOf { Random.nextInt(2) }).absoluteValue
Update
As @Tenfour04 noted, there is no specific stdlib implementation of sumOf for IntProgression, so it resolves to Iterable<T>.sumOf, which adds unnecessary overhead from Int boxing.
So, it's better to use IntArray here instead of IntProgression:
fun randomWalk4(numSteps: Int) =
    (-numSteps + 2 * IntArray(numSteps).sumOf { Random.nextInt(2) }).absoluteValue
I still encourage you to check all of this with JMH.
I think:"Removing mutability without losing speed" is wrong title .because
mutability thing comes to deal with the flow that program want to achieve .
you are using var inside function.... and 100% this var will not ever change from outside this function and that is mutability concept.
if we git rid off from var everywhere why we need it in programming ?

Why is my D code not as performant as expected?

I am doing a benchmark test for my own fun! I have written the same piece of code in many programming languages and benchmarked it using ab to see which is faster and by how much. I know the method may not be entirely valid and cannot be used as evidence for choosing a language, but I am doing it for my own information. The other factor I want to know is how easy or difficult it is to write the same sample in each language. I wrote the code in Python (plain and asyncio), Haskell, Go, Kotlin and D. I expected the D port to be faster than Go (or at least equal in speed), but unfortunately my D code is much slower. Here I put both programs; please help me understand why the D code is not as fast as expected. Or am I completely wrong in my expectations?
import cbor;
import std.array : appender;
import std.format;
import std.json;
import vibe.vibe;

struct Location
{
    float latitude;
    float longitude;
    float altitude;
    float bearing;
}

RedisClient redis;

void main()
{
    auto settings = new HTTPServerSettings;
    redis = connectRedis("localhost", 6379);
    settings.port = 8080;
    settings.bindAddresses = ["::1", "127.0.0.1"];
    listenHTTP(settings, &hello);
    logInfo("Please open http://127.0.0.1:8080/ in your browser.");
    runApplication();
}

void hello(HTTPServerRequest req, HTTPServerResponse res)
{
    if (req.path == "/locations") {
        immutable data = req.json;
        immutable loc = deserializeJson!Location(data);
        auto buffer = appender!(ubyte[])();
        encodeCborAggregate!(Flag!"WithFieldName".yes)(buffer, loc);
        auto db = redis.getDatabase(0);
        db.set("Vehicle", cast(string) buffer.data);
        res.writeBody("Ok");
    }
}
And here is the Go version:
package main

import (
    "bytes"

    "github.com/2tvenom/cbor"
    "github.com/go-redis/redis"
    "github.com/kataras/iris"
    "github.com/kataras/iris/context"
)

type Location struct {
    Latitude  float32 `json:"latitude"`
    Longitude float32 `json:"longitude"`
    Altitude  float32 `json:"altitude"`
    Bearing   float32 `json:"bearing"`
}

func main() {
    app := iris.New()
    client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
    app.Post("/locations", func(ctx context.Context) {
        var loc Location
        ctx.ReadJSON(&loc)
        var buffTest bytes.Buffer
        encoder := cbor.NewEncoder(&buffTest)
        encoder.Marshal(loc)
        client.Set("vehicle", buffTest.Bytes(), 0)
        client.Close()
        ctx.Writef("ok")
    })
    app.Run(iris.Addr(":8080"), iris.WithCharset("UTF-8"))
}
Using ab, Go handles about 4,200 req/sec, while D manages only about 2,800 req/sec!
You're not just benchmarking Go vs D. You're also benchmarking your particular choice of non-standard Go and D libraries against each other: cbor, vibe, iris, etc. And you're benchmarking your particular implementations, which can easily vary by 1000x in performance.
With this many variables, the raw benchmark numbers are pretty meaningless for comparing the performance of two languages. It's possible any one of those 3rd-party libraries is causing a performance problem. Really you're just comparing those two particular programs. This is the core problem of trying to compare anything but trivial programs across languages: there are too many variables.
You can reduce the impact of some of these variables with performance profiling; in Go this would be go tool pprof. This will tell you which functions and lines are called how many times and how many resources they consume. With that you can find bottlenecks, the places in the code which consume a lot of resources, and focus optimization efforts there.
As you do profiling and optimization rounds for each version, you'll get closer to comparing real, optimized implementations. Or you'll have a better understanding of what each language and library does efficiently, and what it doesn't.
The problem of comparing languages is heavily influenced by the particular problem and the particular programmer. X programmers invariably find X to be the best language, not because X is the best language, but because X programmers are at their best when writing in X and probably chose a problem they're comfortable with. Because of this, there are a number of projects that crowd-source the best implementation for each language.
The one which immediately comes to mind is The Computer Language Benchmarks Game. They do Go, but not D. Maybe you can add it?

Swift Explicit vs. Inferred Typing : Performance

I'm reading a tutorial about Swift (http://www.raywenderlich.com/74438/swift-tutorial-a-quick-start) and it recommends not setting the type explicitly, because it's more readable that way.
I don't really agree with that point, but that's not the question. My question is: is it more efficient, in terms of performance (for the compiler, for example), to set the type explicitly?
For example, would this: var hello: Int = 56 be more efficient than this: var tutorialTeam = 56
There is no difference in performance between code that uses explicit types and code which uses type inference. The compiled output is identical in each case.
When you omit the type the compiler simply infers it.
The very small differences observed in the accepted answer are just your usual micro benchmarking artefacts, and cannot be trusted!
Whether or not you include the explicit type is a matter of taste. In some contexts it might make your code more readable.
The only time it makes a difference to your code is when you want to specify a different type to the one which the compiler would infer. As an example:
var num = 2
The above code infers that num is an Int, due to it being initialised with an integer literal. However you can 'force' it to be a Double as follows:
var num: Double = 2
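The same principle carries over to other languages with type inference; as a quick Kotlin illustration of my own (not from the answer):

fun main() {
    val a = 2         // inferred as Int from the integer literal
    val b = 2.0       // inferred as Double
    val c: Long = 2   // explicit annotation forces Long instead of the inferred Int
    println(a::class) // class kotlin.Int
    println(c::class) // class kotlin.Long
}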
From my experience, there's been a huge performance impact in terms of compilation speed when using explicit vs inferred types. A majority of my slow compiling code has been resolved by explicitly typing variables.
It seems like the Swift compiler still has room for improvement in this area. Try benchmarking some of your projects and you'll see a big difference.
Here's an article I wrote on how to speed up slow Swift compile times and how to find out what is causing it.
Type inference will not affect performance in your given example. However, I did find that being specific about the element type of a Swift array does impact performance significantly.
For example, the method below shuffles an array of type Any.
class func shuffleAny(inout array: [Any]) {
    for (var i = 0; i < array.count; i++) {
        let currentObject: Any = array[i]
        let randomIndex = Int(arc4random()) % array.count
        let randomObject: Any = array[randomIndex]
        array[i] = randomObject
        array[randomIndex] = currentObject
    }
}
The above function is actually much slower than if I make the function take an array of Int instead, like this:
class func shuffleIntObjects(inout array: [Int]) {
    for (var i = 0; i < array.count; i++) {
        let currentObject: Int = array[i]
        let randomIndex = Int(arc4random()) % array.count
        let randomObject: Int = array[randomIndex]
        array[i] = randomObject
        array[randomIndex] = currentObject
    }
}
The function that uses [Any] clocked in at 0.537 seconds (3% STDEV) for 1 million Int objects, and the function that uses [Int] clocked in at 0.181 seconds (2% STDEV) for 1 million Int objects.
You can check out this repo (https://github.com/vsco/swift-benchmarks), which details a lot more interesting benchmarks in Swift. One of my favorites is that Swift generics perform very poorly under the test conditions mentioned above.

When are numbers NOT Magic?

I have a function like this:
float_as_thousands_str_with_precision(value, precision)
If I use it like this:
float_as_thousands_str_with_precision(volts, 1)
float_as_thousands_str_with_precision(amps, 2)
float_as_thousands_str_with_precision(watts, 2)
Are those 1s and 2s magic numbers?
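For concreteness, the original post doesn't show the function body, but a hypothetical Kotlin sketch of what such a function might do (assuming "thousands" means a grouping separator; the name and body are mine) is:

// Hypothetical: format a value with a thousands separator and the given
// number of decimal places, e.g. floatAsThousandsStr(1234.5678, 2) == "1,234.57"
// (the separator character depends on the default locale).
fun floatAsThousandsStr(value: Double, precision: Int): String =
    "%,.${precision}f".format(value)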
Yes, they are magic numbers. It's obvious that the numbers 1 and 2 specify precision in the code sample but not why. Why do you need amps and watts to be more precise than volts at that point?
Also, avoiding magic numbers allows you to centralize code changes rather than having to scour the code for the literal number 2 when your precision needs to change.
I would propose something like:
HIGH_PRECISION = 3;
MED_PRECISION = 2;
LOW_PRECISION = 1;
And your client code would look like:
float_as_thousands_str_with_precision(volts, LOW_PRECISION)
float_as_thousands_str_with_precision(amps, MED_PRECISION)
float_as_thousands_str_with_precision(watts, MED_PRECISION)
Then, if in the future you do something like this:
HIGH_PRECISION = 6;
MED_PRECISION = 4;
LOW_PRECISION = 2;
All you do is change the constants...
But to try and answer the question in the OP title:
IMO the only numbers that can truly be used and not be considered "magic" are -1, 0 and 1 when used in iteration, testing lengths and sizes and many mathematical operations. Some examples where using constants would actually obfuscate code:
for (int i=0; i<someCollection.Length; i++) {...}
if (someCollection.Length == 0) {...}
if (someCollection.Length < 1) {...}
int MyRidiculousSignReversalFunction(int i) {return i * -1;}
Those are all pretty obvious examples: start at the first element and increment by one, test whether a collection is empty, and sign reversal... ridiculous, but it works as an example. Now replace all of the -1, 0 and 1 values with 2:
for (int i=2; i<50; i+=2) {...}
if (someCollection.Length == 2) {...}
if (someCollection.Length < 2) {...}
int MyRidiculousDoublingFunction(int i) {return i * 2;}
Now you have to start asking yourself: Why am I starting iteration on the 3rd element and checking every other one? What's so special about the number 50? What's so special about a collection with two elements? The doubler example actually makes sense here, but you can see that the non -1, 0, 1 values of 2 and 50 immediately become magic, because there's obviously something special in what they're doing and we have no idea why.
No, they aren't.
A magic number in that context would be a number with an unexplained meaning. In your case, the numbers specify the precision, which is clearly visible.
A magic number would be something like:
int calculateFoo(int input)
{
    return 0x3557 * input;
}
You should be aware that the phrase "magic number" has multiple meanings. In this case, it specifies a number in source code, that is unexplainable by the surroundings. There are other cases where the phrase is used, for example in a file header, identifying it as a file of a certain type.
A literal numeral IS NOT a magic number when:
it is used one time, in one place, with very clear purpose based on its context
it is used with such common frequency and within such a limited context as to be widely accepted as not magic (e.g. the +1 or -1 in loops that people so frequently accept as being not magic).
Some people accept the +1 of a zero offset as not magic. I do not. When I see variable + 1 I still want to know why, whereas ZERO_OFFSET cannot be mistaken.
As for the example scenario of:
float_as_thousands_str_with_precision(volts, 1)
And the proposed
float_as_thousands_str_with_precision(volts, HIGH_PRECISION)
The 1 is magic if that call for volts with 1 is going to be repeated for the same purpose in multiple places. Then sure, it's "magic", not because the meaning is unclear, but because you simply have multiple occurrences.
Paul's answer focused on the "unexplained meaning" part, thinking HIGH_PRECISION = 3 explained the purpose. IMO, HIGH_PRECISION offers no more explanation or value than something like PRECISION_THREE or THREE or 3. Of course 3 is higher than 1, but it still doesn't explain WHY higher precision was needed, or why there's a difference in precision. The numerals offer every bit as much intent and clarity as the proposed labels.
Why is there a need for varying precision in the first place? As an engineering guy, I can assume there are three possible reasons: (a) a true engineering justification that the measurement itself is only valid to X precision, so the display should reflect that; (b) there's only enough display space for X precision; or (c) the viewer won't care about anything higher than X precision even if it's available.
Those are complex reasons, difficult to capture in a constant label, and are probably better served by a comment (to explain why something is being done).
If the use of those functions were in one place, and one place only, I would not consider the numerals magic. The intent is clear.
For reference:
A literal numeral IS magic when it is one of the "unique values with unexplained meaning or multiple occurrences which could (preferably) be replaced with named constants." http://en.wikipedia.org/wiki/Magic_number_%28programming%29 (3rd bullet)

What are the pros and cons of using the same constant value in a test method?

I am very new to TDD. I am reading TDD By Example, and it says "never try to use the same constant to mean more than one thing", showing an example with a Plus() method.
In my opinion, there is no difference between Plus(1, 1), which uses the same constant value, and Plus(1, 2). I want to know: what are the pros and cons of using the same constant value in a test method?
I think you misinterpret that statement. What the author (imho) is trying to convey is that the following code is a recipe for disaster.
const SomeRandomValue = 32;
...
// Plus testcase
Plus(SomeRandomValue, SomeRandomValue)
...
// Divide testcase
Divide(SomeRandomValue, SomeRandomValue)
You have two test cases reusing a non-descriptive constant. There is no way to see that changing SomeRandomValue to 0 will make your test suite fail (the Divide test case would divide by zero).
A better naming would be something like
const AdditionValue = 32;
const DivisorValue = 32;
...
// Plus testcase
Plus(AdditionValue, AdditionValue)
...
// Divide testcase
Divide(DivisorValue, DivisorValue)
where it should be obvious what the constants are used for. You should not get too hung up on the idea of code reuse when creating test cases.
Or to put it in other words:
I don't see anything wrong with reusing the DivisorValue constant in multiple test cases, but there is definitely something wrong with trying to shoehorn one value into a non-descriptive variable just in the name of code reuse.
If you use the same value in your test - as in Plus(1, 1) - your code could work for the wrong reason. Here is an implementation of Plus that will pass such a test, but fail a test with different values.
public int Plus(int a, int b) {
    return a + a;
}
A test that avoids this risk is a better test than one which lets errors like these slip through.
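As a minimal sketch of that argument (my own illustration, in Kotlin; plus is assumed to be the function under test):

// Deliberately buggy implementation: returns a + a instead of a + b.
fun plus(a: Int, b: Int): Int = a + a

fun main() {
    check(plus(1, 1) == 2) // passes even though plus() is wrong
    check(plus(1, 2) == 3) // throws IllegalStateException, exposing the bug
}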
