I have read several sources that discuss how to swap two numbers without using a third variable. These are a few of the most relevant:
How do you swap two integer variables without using any if conditions, casting, or additional variables?
Potential Problem in "Swapping values of two variables without using a third variable"
Swap two integers without using a third variable
http://www.geeksforgeeks.org/swap-two-numbers-without-using-temporary-variable/
I understand why it doesn't make sense to use the described methods in most cases: the code becomes cluttered and difficult to read, and it will usually execute more slowly than a solution using a third "temp" variable. However, none of the questions I have found discuss any benefits of the two-variable methods in practice. Do they have any redeeming qualities or benefits (historical or contemporary), or are they only useful as obscure programming trivia?
At this point it's just a neat trick. Speed-wise, where it makes sense, your compiler will recognize a normal temp-variable swap and optimize it appropriately (but there are no guarantees that it will recognize weird XORing and optimize that appropriately).
Another strike against XOR is that if one variable aliases the other, XOR'ing them will zero both out. Since you'll have to check for and handle this condition, you'll have extra code involved – probably by using the third-variable method.
You could also try adding and subtracting values… except that you’d have to check for and handle overflow, which would involve more code (probably the third variable method). Multiplication and division have the same flaw, but more importantly, there’s the exquisite delight of representing fractions in binary (so this wouldn’t work in the first place).
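For reference, here is roughly what the XOR trick and its aliasing failure look like, sketched in Go (the same hazard applies in C and friends whenever both operands refer to the same storage):

    package main

    import "fmt"

    func main() {
        a, b := 5, 9

        // XOR swap without a temporary.
        a ^= b
        b ^= a
        a ^= b
        fmt.Println(a, b) // 9 5

        // The aliasing problem: when both "variables" are the same storage,
        // the first XOR zeroes it and the value is lost instead of swapped.
        xs := []int{5}
        i, j := 0, 0 // i and j happen to index the same element
        xs[i] ^= xs[j]
        xs[j] ^= xs[i]
        xs[i] ^= xs[j]
        fmt.Println(xs[0]) // 0, not 5

        // The idiomatic (and easily optimized) alternative:
        a, b = b, a
        fmt.Println(a, b) // 5 9
    }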
Edit: D’oh, sorry for the thread necromancy… got so caught up in following links that I forgot to check the dates.
Not long ago I read that the Commodore 64's BASIC interpreter contains a POS function which returns the current horizontal position of the cursor. Since then, I've noticed this idiosyncrasy in some other BASIC dialects, including Microsoft QBASIC, and even Roku's BrightScript, which is much more recent.
What I'm wondering is, why is this a thing? If the value of the argument isn't used, why even require it? My guess is that maybe really early on BASIC didn't support functions without arguments, and it's stuck around for whatever reason, probably compatibility. But that wouldn't explain why it's still a required argument.
Worth mentioning is that QBASIC also includes CSRLIN, which returns the vertical position of the cursor, but it doesn't require (or accept) any arguments. This supports my idea that it came from "ancient times": POS would have been a well-defined operation on the earliest terminals (teletypes), but CSRLIN wouldn't have made sense until later hardware.
I seem to recall (very vaguely) that the lookup table in which pos was placed was one where all functions had an argument (like sin or fre). To that end, pos used common code to ensure it had an argument, even though it was ignored.
The BASIC interpreter in the C64, being based on the (rather limited) 6502 CPU, had to resort to all sorts of wondrous tricks to allow all its functionality.
Now keep in mind that recalling this required reaching down into my gray matter through 30-odd years of detritus. I suspect you'll get a more accurate(a) answer over at the retro-computing sister site.
(a) And probably more complete, to an almost painful degree :-)
I have a function in my program that generates random strings.
func randString(s []rune, l int) string
s is a slice of runes containing the possible characters in the string. I pass
in a rune slice of both capital and lowercase alphabetic characters. l
determines the length of the string. This works great. But I also need to
generate random hex strings for html color codes.
It seems all sources say that it's good programming practice to reuse code. So I
made another []rune that held [0-9a-f] and fed that into randString. That
was before I realized that the stdlib already includes formatting verbs for int
types that suit me perfectly.
In practice, is it better to reuse my randString function or code a separate
(more efficient) function? I would generate a single random int and Sprintf it
rather than having to loop and generate 6 random ints which randString does.
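For concreteness, here is roughly what the two options look like side by side. The question only shows randString's signature, so its body below is my assumption of a typical implementation:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // randString builds a string of length l by picking runes from s at random
    // (one random index per output character).
    func randString(s []rune, l int) string {
        out := make([]rune, l)
        for i := range out {
            out[i] = s[rand.Intn(len(s))]
        }
        return string(out)
    }

    func main() {
        letters := []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
        fmt.Println(randString(letters, 8)) // e.g. "kPqRzTwA"

        // Hex color from a single random int and a formatting verb,
        // instead of six separate random rune picks.
        fmt.Printf("#%06x\n", rand.Intn(0x1000000)) // e.g. "#3fa2c7"
    }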
1) If there is an exact solution in the standard library, you should almost always choose to use that.
Because:
The standard library is tested, so it does what it says (or what we expect it to do). Even if there is a bug in it, it will be discovered (by you or by others) and will get fixed without any work on your side.
The standard library is written as idiomatic Go. Chances are it's faster than the solution you could write, even if it does a little more than what you need.
The standard library improves (or may improve) over time. Your program may get faster just because an implementation was improved in a new Go release, without any effort on your part.
The solution already exists (which means it's ready and requires no time from you).
The standard library is well and widely known, so your code will be easier to understand by others and by you later on.
If you've already imported the package (or will in the near future), this means zero or minimal overhead: libraries are statically linked, so the function you need is already linked into your program (into the compiled executable binary).
2) If there is a solution provided by the standard library but it is a general solution to similar problems and/or offers more than what you need:
That means it's likely not the optimal solution for you, as it may use more memory and/or run more slowly than your own solution would.
You need to decide if you're willing to accept that small performance loss for the gains listed above. This also depends on how and how often you need to use it (e.g. if it's a one-time call, it shouldn't matter; if it's called very frequently in a tight loop, it should be examined carefully).
3) And at the other end: you should avoid using a solution provided by the standard library if it wasn't designed to solve your problem...
If it just happens that a "side effect" of it solves your problem: even if the current implementation is acceptable, since it was designed for something else, future improvements to it could render your usage of it completely useless or could even break it.
Not to mention it would confuse other developers trying to read, improve or use your code (you included, after a certain amount of time).
As a side note: this question is exactly about the function you're trying to create: How to generate a random string of a fixed length in golang? I've presented multiple very efficient solutions there.
This is fairly subjective and not Go-specific, but I think you shouldn't reuse code just for the sake of reuse. The more code you reuse, the more dependencies you create between different parts of your app, and as a result it becomes more difficult to maintain and modify. Code that is easy to understand and modify is much more important, especially if you work in a team.
For your particular example I would do the following.
If a random color is generated only once in your package/application then using fmt.Sprintf("#%06x", rand.Intn(256*256*256)) is perfectly fine (as suggested by Dave C).
If random colors are generated in multiple places I would create a function func randColor() string and call it. Note that now you can optimize the randColor implementation however you like without changing the rest of the code. For example you could have implemented randColor using randString initially and then switched to a more efficient implementation later.
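A minimal sketch of that shape (the randColor name comes from the suggestion above; the body shown is just one possible implementation):

    package main

    import (
        "fmt"
        "math/rand"
    )

    // randColor returns a random HTML color code such as "#3fa2c7",
    // using the single-Sprintf approach suggested above.
    func randColor() string {
        return fmt.Sprintf("#%06x", rand.Intn(256*256*256))
    }

    func main() {
        fmt.Println(randColor())
        // An earlier version could just as well have been built on the
        // question's randString, e.g. "#" + randString([]rune("0123456789abcdef"), 6);
        // callers would not notice the switch.
    }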
I am new at VHDL and I have to multiply two unsigned vectors like we all did in high school,
so I wrote the program and it does compile, but the result is not good.
The logic looks OK but it still does not work. Can anyone help?
I could not figure out how to place code here, so please see the attached image.
Thx
When writing VHDL you'll first and foremost need to think hardware. Even though various statements may look similar to what you know from other languages, many of these behave differently, as they are mapped to hardware and evaluated in parallel rather than sequentially.
For instance, for loops in VHDL do not iterate through the loop, but rather replicate the loop contents and evaluate all of these in parallel. So your idea of accumulating temp will not work, as all values of temp1 would be available at the same time instead of one after another.
The easy way of handling multiplication is to just use the * operator, as many synthesizers will pick this up and automatically instantiate the necessary hardware. I assume this is some form of exercise though, where you need to implement the functionality yourself – so just ditch the for loop, store the intermediate results in their own variables, and then add them all up in the end.
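To make the arithmetic concrete (this is a Go sketch of the shift-and-add scheme being described, not VHDL, and the 4-bit width is just an example): each bit of one operand selects a shifted copy of the other, the partial products are kept in their own variables, and they are summed at the end.

    package main

    import "fmt"

    // mul4 multiplies two 4-bit values the "high school" way: one partial
    // product per bit of b, each held in its own variable rather than
    // accumulated in a loop, then all added up at the end.
    func mul4(a, b uint8) uint8 {
        p0 := (b & 1) * a               // a if bit 0 of b is set, else 0
        p1 := ((b >> 1) & 1) * (a << 1) // a shifted by the bit position
        p2 := ((b >> 2) & 1) * (a << 2)
        p3 := ((b >> 3) & 1) * (a << 3)
        return p0 + p1 + p2 + p3
    }

    func main() {
        fmt.Println(mul4(5, 3)) // 15
    }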
When creating a private or protected variable, method, class, etc., should it be commented with a documentation comment?
Yes! The comments are to help any developer - yourself included - when reviewing, maintaining or extending the code in future. Whether it's public/private shouldn't be an influencing factor, quite simply if you think something isn't clear enough without a comment, put one in.
(Of course the best documentation is clear self-documenting code in the first place)
Some people will no doubt tell you that nothing needs to be commented (and technically they are right, in that comments have no effect on the output). However, it comes down to 'coding style', as you tagged it. I personally always comment all variables in addition to giving them descriptive names. Remember that other people may want to work with your source, or you might want to in a year's time, in which case it's worth the few seconds to document it while you still know what it does.
Definitely yes. When, for example, you find a bug in your code after three months, comments will make it easier to recall what the code was supposed to do.
Commenting individual variables is occasionally helpful, but more often than not variables will have logical groupings that will be expected to uphold certain invariants. A comment describing how the group as a whole is supposed to behave will often be more useful than comments describing individual variables.
For example, an EditablePolygon class in Java might contain four essential fields:
// Group invariant (spelled out below): both arrays have the same length,
// that length is >= numCoords, and only slots below numCoords are in use.
int[] xCoords;
int[] yCoords;
int numCoords;
int sharedPortion;
and be expected to uphold the invariants that both arrays will always be the same length, that length will be >= numCoords, and all coordinates of interest will be in array slots below numCoords. It may further specify that multiple EditablePolygon objects may share the same arrays, provided that all but one such object has a sharedPortion greater than numCoords or equal to the array length, and that the one remaining object's sharedPortion is no less than the numCoords value of any of the others [making a clone of a shape not require a defensive copy unless a change is requested to a part of the original which is shared with the clone, or to any part of the clone (which is entirely shared with the original)].
Note that the most important things for the comments to document are (1) the array lengths may exceed the number of points, and (2) certain portions of the array may be shared. The first may be somewhat obvious from the code, but the second will likely be far less obvious. The field sharedPortion does have some meaning in isolation, but its meaning and purpose can really only be understood in relation to the other variables.
It's good practice to document methods and classes. Moreover, Javadocs for public methods should be stressed more, as those act as a reference manual for external callers. Similarly, Javadoc can be beneficial for public variables, though I personally am not in favor of having comments for variables.
Is there a good coding technique that specifies how many lines a function should have?
No. Lines of code is a pretty bad metric for just about anything. The exception is perhaps functions that have thousands and thousands of lines - you can be pretty sure those aren't well written.
There are however, good coding techniques that usually result in fewer lines of code per function. Things like DRY (Don't Repeat Yourself) and the Unix-philosophy ("Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface." from Wikipedia). In this case replace "programs" with "functions".
I don't think it matters; who is to say that once a function's length passes a certain number of lines it breaks a rule?
In general, just write clean functions that are easy to use and reuse.
A function should have a well-defined purpose. That is, try to create functions which do a single thing, either by doing the thing itself or by delegating work to a number of other functions.
Most functional compilers are excellent at inlining. Thus there is no inherent price to pay for breaking up your code: The compiler usually does a good job at deciding if a function call should really be one or if it can just inline the code right away.
The size of the function is less relevant though most functions in FP tend to be small, precise and to the point.
There is McCabe's Cyclomatic Complexity metric, which you can read about in this Wikipedia article.
The metric measures how many tests and loops are present in a routine. A rule of thumb might be that 10 or below is a manageable amount of complexity, while 11 and above becomes more fault-prone.
I have seen horrendous code that had a Complexity metric above 50. (It was error-prone and difficult to understand or change.) Re-writing it and breaking it down into subroutines reduced the complexity to 8.
Note that the complexity metric is usually roughly proportional to the lines of code, but it provides you a measure of complexity rather than of raw length.
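As a rough illustration of how the count works (the function itself is made up; complexity is one for the routine plus one per decision point, which is how most tools count it):

    package main

    import "fmt"

    // classify has a cyclomatic complexity of 4: 1 for the function itself,
    // plus 1 each for the for loop, the if, and the else-if.
    func classify(values []int) (negatives, positives int) {
        for _, v := range values { // +1
            if v < 0 { // +1
                negatives++
            } else if v > 0 { // +1
                positives++
            }
        }
        return
    }

    func main() {
        n, p := classify([]int{-2, 0, 3, 7})
        fmt.Println(n, p) // 1 2
    }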
When working in Forth (or playing in Factor) I tend to continually refactor until each function is a single line! In fact, if you browse through the Factor libraries you'll see that the majority of words are one-liners and almost nothing is more than a few lines. In a language with inner functions and virtually zero cost for calls (that is, threaded code implicitly having no stack frames [only a return-pointer stack], or with aggressive inlining) there is no good reason not to refactor until each function is tiny.
In my experience, a function with a lot of lines of code (more than a few pages) is a nightmare to maintain and test. Having said that, I don't think there is a hard and fast rule for this.
I came across some VB.NET code at my previous company that had one function of 13 pages, but my record is some VB6 code I have just picked up that is approximately 40 pages! Imagine trying to work out which If statement an Else belongs to when they are pages apart on the screen.
The main argument against having functions that are "too long" is that subdividing the function into smaller functions that only do small parts of the entire job improves readability (by giving those small parts actual names, and helping the reader wrap his mind around smaller pieces of behavior, especially when line 1532 can change the value of a variable on line 45).
In a functional programming language, this point is moot:
You can subdivide a function into smaller functions that are defined within the larger function's body, which does not reduce the length of the original function.
Functions are expected to be pure, so there's no actual risk of line X changing the value read on line Y: the value of the line Y variable can be traced back up the definition list quite easily, even in loops, conditionals or recursive functions.
So, I suspect the answer would be "no one really cares".
I think a long function is a red flag and deserves more scrutiny. If I came across a function that was more than a page or two long during a code review I would look for ways to break it down into smaller functions.
There are exceptions though. A long function that consists of mostly simple assignment statements, say for initialization, is probably best left intact.
My (admittedly crude) guideline is a screenful of code. I have seen code with functions going on for pages. This is emetic, to be charitable. Functions should have a single, focused purpose. If you are trying to do something complex, have a "captain" function call helpers.
Good modularization makes friends and influences people.
IMHO, the goal should be to minimize the amount of code that a programmer would have to analyze simultaneously to make sense of a program. In general, excessively-long methods will make code harder to digest because programmers will have to look at much of their code at once.
On the other hand, subdividing methods into smaller pieces will only be helpful if those smaller pieces can be analyzed separately from the code which calls them. Splitting a method into sub-methods which would only be meaningful in the context where they are called is apt to impair rather than improve legibility. Even if the method before splitting would have been over 250 lines, breaking it into ten pieces which don't make sense in isolation would simply increase the simultaneous-analysis requirement from 250 lines to 300+ (depending upon how many lines are added for method headers, the code that calls them, etc.). When deciding whether a method should be subdivided, it's far more important to consider whether the pieces make sense in isolation than to consider whether the method is "too long". Some 20-line routines might benefit from being split into two ten-line routines and a two-line routine that calls them, but some 250-line routines might benefit from being left exactly as they are.
Another point which needs to be considered, btw, is that in some cases the required behavior of a program may not be a good fit with the control structures available in the language it's written in. Most applications have large "don't-care" aspects of their behavior, and it's generally possible to assign behavior that will fit nicely with a language's available control structures, but sometimes behavioral requirements may be impossible to meet without awkward code. In some such cases, confining the awkwardness to a single method which is bloated, but which is structured around the behavioral requirements, may be better than scattering it among many smaller methods which have no clear relationship to the overall behavior.