I am wondering whether there is any difference between the two pieces of code below, for performance or otherwise. In the first, there are three variables, and each one is defined at the point where it is first used.
bool myFunc()
{
    string networkName;
    if ( !Parse(example, XML_ATTRIBUTE_NAME, networkName) )
    {
        return false;
    }

    BYTE networkId;
    if ( !Parse(example, XML_ATTRIBUTE_ID, networkId) )
    {
        return false;
    }

    string baudRate;
    if ( !Parse(example, XML_ATTRIBUTE_BAUDRATE, baudRate) )
    {
        return false;
    }

    return true;
}
Is there any difference between the code above and the code below, for performance or otherwise?
bool myFunc()
{
    string networkName;
    string baudRate;
    BYTE networkId;

    if ( !Parse(example, XML_ATTRIBUTE_NAME, networkName) )
    {
        return false;
    }
    if ( !Parse(example, XML_ATTRIBUTE_ID, networkId) )
    {
        return false;
    }
    if ( !Parse(example, XML_ATTRIBUTE_BAUDRATE, baudRate) )
    {
        return false;
    }

    return true;
}
Code Readability
The recommended practice is to put the declaration as close as possible to the first place where the variable is used. This also minimizes the scope.
From Steve McConnell's "Code Complete" book:
Ideally, declare and define each variable close to where it’s first
used. A declaration establishes a variable’s type. A definition assigns
the variable a specific value. In languages that support it, such as
C++ and Java, variables should be declared and defined close to where
they are first used. Ideally, each variable should be defined at the
same time it’s declared.
Nevertheless, a few sources recommend placing declarations at the beginning of the block ({}).
From the obsolete Java Code Conventions:
Put declarations only at the beginning of blocks. (A block is any code
surrounded by curly braces "{" and "}".) Don't wait to declare
variables until their first use; it can confuse the unwary programmer
and hamper code portability within the scope.
Declaring variables only at the top of the function is considered bad practice. Place declarations in the most local blocks.
Performance
In fact, it depends. For POD types, the placement of the declaration should not affect performance at all: the memory for all local variables is typically allocated in a single step when the function is entered (C, JavaScript, ActionScript...).
For non-POD types (C++), the constructor runs at the point of declaration, so declaring a variable after an early return can avoid constructing an object that would never be used; beyond that, the compiler optimizes your code, so non-POD types shouldn't be a problem either.
Usually, choosing where to declare a variable is a premature optimization, so performance is an unimportant point here: the boost (or overhead) is microscopically small. The major argument is still code readability.
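To make the non-POD point concrete, here is a small C++ sketch (the Tracked type and the earlyExit/constructionsCausedBy functions are invented for illustration): when the declaration sits after an early return, the constructor never runs on the failing path.

```cpp
// Hypothetical type whose constructions we can count.
struct Tracked {
    static int constructions;
    Tracked() { ++constructions; }
};
int Tracked::constructions = 0;

// Declaring the non-POD variable after the early return means its
// constructor only runs on the success path.
bool earlyExit(bool fail) {
    if (fail) {
        return false;  // Tracked below is never constructed
    }
    Tracked t;  // constructed only when we get this far
    (void)t;
    return true;
}

// Helper: how many constructions does one call cause?
int constructionsCausedBy(bool fail) {
    int before = Tracked::constructions;
    earlyExit(fail);
    return Tracked::constructions - before;
}
```

With POD types the two placements compile to the same thing, but here the failing path genuinely skips work.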
Additional Note
Before the C99 standard, the C language required variables to be declared at the beginning of a block.
Summarizing
Considering the above, the best approach (though still not mandatory) is to declare a variable as close as possible to its first use, keeping the scope clean.
In general, it's just a matter of code readability.
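As a small illustration of "declare close to first use", here is a C++ sketch (countNonEmpty is a made-up example): the loop-local variable's scope is one iteration, so a reader never has to wonder whether it carries state between iterations.

```cpp
#include <string>
#include <vector>

// Sketch: count is needed across the whole function, so it is declared
// at the top; empty is only needed inside one iteration, so it is
// declared there, keeping its scope minimal.
int countNonEmpty(const std::vector<std::string>& lines) {
    int count = 0;
    for (const std::string& line : lines) {
        bool empty = line.empty();  // scoped to this iteration only
        if (!empty) {
            ++count;
        }
    }
    return count;
}
```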
Related
// reflect/value.go
func ValueOf(i interface{}) Value {
	if i == nil {
		return Value{}
	}

	// TODO: Maybe allow contents of a Value to live on the stack.
	// For now we make the contents always escape to the heap. It
	// makes life easier in a few places (see chanrecv/mapassign
	// comment below).
	escapes(i)
The code above is from value.go in Go's reflect package, and the comment above the escapes(i) call says that each time we call ValueOf, i escapes to the heap. Why is that? Namely, how should the "It makes life easier in a few places" be explained?
I am still learning Go, so I can't describe more; that's why this is a community wiki answer. But here's what the excerpted note says (the note above the chanrecv function):
Note: some of the noescape annotations below are technically a lie,
but safe in the context of this package. Functions like chansend
and mapassign don't escape the referent, but may escape anything
the referent points to (they do shallow copies of the referent).
It is safe in this package because the referent may only point
to something a Value may point to, and that is always in the heap
(due to the escapes() call in ValueOf).
Also see:
// Dummy annotation marking that the value x escapes,
// for use in cases where the reflect code is so clever that
// the compiler cannot follow.
func escapes(x interface{}) {
	if dummy.b {
		dummy.x = x
	}
}

var dummy struct {
	b bool
	x interface{}
}
I hope this will be helpful.
Code I'm exploring:
type Stack struct {
	length int
	values []int
}

func (s *Stack) Push(value int) {
	// ...
}

func (s *Stack) Pop() int {
	// ...
}

func (s *Stack) Length() int {
	return s.length
}
Methods Push and Pop change the length field in the Stack struct. I wanted to hide this field from other files to prevent code like stack.length = ... (a manual length change), but I still needed the ability to read it, so I added a getter method: Length.
And my question is:
Shouldn't stack.Length() become slower than stack.length, because it is a function call? I have learnt some assembler, and I know how many operations a program must perform to call a function. Have I understood correctly: by adding the getter method stack.Length(), I protect users of my library from bad usage, but the cost is the program's performance? This actually concerns not only Go.
Shouldn't stack.Length() become slower than stack.length, because it is a function call?
Objection! Assumes facts not in evidence.
Specifically:
Why do you think it is a function call? It looks like one, but actual Go compilers will often expand the code in line.
Why do you think a function call is slower than inline code? When measuring actual programs on actual computers, sometimes function calls are faster than inline code. It turns out the crucial part is usually whether the instructions being executed, and their operands, are already in the appropriate CPU caches. Sometimes, expanding functions inline makes the program run more slowly.
The compiler should do the inline expansion unless it makes the program run more slowly. How good the compiler is at pre- or post-detecting such slowdowns, if present, is a separate issue. In this particular case, given the function definition, the compiler is almost certain to just expand the function in line, as accessing stack.length will likely be one instruction, and calling a function will be one instruction, and deciding the tradeoff here will be easy.
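Since the question notes this concerns not only Go, here is a C++ sketch of the same trade-off (the class and the lengthAfterPushes helper are invented for illustration): a trivial accessor defined in the class body is implicitly inline, and an optimizing compiler will typically reduce a call to it to the same load as reading the field directly.

```cpp
#include <cstddef>
#include <vector>

// A C++ analogue of the Go Stack: the data is private, the accessor is
// a trivial inline function. At -O2 a call to Length() typically
// compiles to the same single load that reading the field would.
class Stack {
public:
    void Push(int value) { values_.push_back(value); }
    int Pop() {
        int v = values_.back();
        values_.pop_back();
        return v;
    }
    std::size_t Length() const { return values_.size(); }  // trivial getter
private:
    std::vector<int> values_;
};

// Small helper so the behaviour is checkable: push n values, report length.
std::size_t lengthAfterPushes(int n) {
    Stack s;
    for (int i = 0; i < n; ++i) s.Push(i);
    return s.Length();
}
```

The encapsulation benefit (no outside code can corrupt the length) comes essentially for free once the call is inlined.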
I frequently see (and have sometimes written myself) constructs like:
if (A || B)
{
    var sharedValue = ...;
    if (A)
    {
        // do stuff with sharedValue
    }
    else
    {
        // do other stuff with sharedValue
    }
}
Real example:
switch (e.Key)
{
    /*
     * Don't mind the switch, since it can simply be converted into:
     * if (e.Key == Key.Left || e.Key == Key.Right) { ... }
     */
    case Key.Left:
    case Key.Right:
        var container = containerFromData(data);
        if (e.Key == Key.Left)
        {
            this.TryNavigateBackward(container);
        }
        else
        {
            this.TryNavigateForward(container);
        }
        break;
}
I really feel like I'm missing something, so there must be a better (simpler, less verbose) way to express this, but I couldn't come up with one. This question may be somewhat bound to the programming language being used (I'm currently in C#), but are there any constructs out there that can simplify the given example?
Note: I'm aware of the ternary conditional operator a ? b : c, but this (at least in C#) only works for computing values and assigning them to variables. The example above wants to actually do different (possibly complex) things with a shared value.
Since I don't see any other suggestions, I'll throw some out and see whether they're what you're looking for.
First, if OOP is on the table, inheritance can represent this sort of shared behavior. What you would do is encapsulate the shared, A-specific and B-specific behavior in classes Shared, ASpecific and BSpecific, where ASpecific and BSpecific inherit from Shared. Then, if either A or B, you spin up an instance of either ASpecific or BSpecific, respectively, and then treat it as an instance of Shared. If you have conditions C, D, etc. that don't use the shared thing, you'd have possibly another parent class called Base and you'd have CBase, DBase inheriting from Base, Shared inheriting from Base, and spin up an instance depending on the condition and treat the result as an instance of Base.
Second, you could use inversion of control by passing in A-specific and B-specific behaviors to a shared method when shared stuff is required. You could use OOP for this or pure functional programming. Going with the latter (since the former is similar to the above solution and maybe not as good), you'd have a shared function that takes function f as an argument. The function f would have a signature that requires the shared object be passed in. Then, if A, call shared with a function (pointer or anonymous inline) that does the A-specific stuff to the shared object passed into it; otherwise, if B, call shared with a function that does the B-specific stuff to the shared object passed into it.
If all you really want to avoid is nesting, you could also pull the if (A || B) { … } logic out and have it initialize shared, which is declared (but not initialized) in a higher scope; then check A and B separately later, knowing that in those cases shared has already been set up.
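A minimal sketch of the second (inversion-of-control) suggestion, written in C++ with a lambda standing in for the condition-specific behavior (all names here are invented; the same shape works with C# Action&lt;T&gt; delegates):

```cpp
#include <functional>
#include <string>

// Stand-in for whatever expensive shared setup the original code does.
std::string makeSharedValue() { return "shared"; }

// The shared wrapper builds the shared value once, then hands it to
// whichever condition-specific action the caller passed in.
void withSharedValue(const std::function<void(const std::string&)>& action) {
    std::string sharedValue = makeSharedValue();
    action(sharedValue);
}

// Mirroring the Key.Left / Key.Right example: each branch supplies its
// own action, and neither branch repeats the shared setup.
std::string navigate(bool left) {
    std::string result;
    if (left) {
        withSharedValue([&](const std::string& v) { result = "back:" + v; });
    } else {
        withSharedValue([&](const std::string& v) { result = "forward:" + v; });
    }
    return result;
}
```

The nesting is gone, and the shared setup lives in exactly one place.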
In the following example, why doesn't the move constructor get called in the construction of copy inside fun, even though the src argument of fun is explicitly an rvalue reference and is only used in that construction?
#include <iostream>

static void log(char const* msg) { std::cout << msg << '\n'; }

struct Toy {
    int data;

    Toy() : data(0)
    {
        log("Constructed");
    }
    Toy(Toy const& src) : data(src.data)
    {
        log("Copy-constructed");
    }
    Toy(Toy&& src) : data(src.data)
    {
        log("Move-constructed");
    }
};

Toy fun(Toy&& src)
{
    Toy copy(src);
    copy.data = 777;
    return copy;
}

Toy toy(fun(Toy())); // LOG: Constructed Copy-constructed
While Toy&& src is an rvalue reference, every named use of src after construction uses it as an lvalue.
So Toy&& src will only bind to rvalues, but when you use src by name it will not move.
The only ways to get an rvalue reference are:
A value without a name, such as a temporary return value or result of a cast.
Use of a local value variable in a simple return x; statement.
Explicitly casting to an rvalue, such as with std::move or std::forward.
This prevents a variable from being silently moved from on one line and then used on the next. It can help to think of an rvalue reference as meaning "I, the programmer, say this is not needed after this expression" at the point of use, and "only take things that are not needed afterwards" in function parameters. The temporary/return exceptions above are two spots where the compiler can relatively safely guarantee this itself.
Finally, note that universal references (auto&& and T&&) look like rvalue references but sometimes are not.
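A small sketch of the fix (Probe and the two helper functions are invented for illustration): casting the named parameter back to an rvalue with std::move is what selects the move constructor.

```cpp
#include <string>
#include <utility>

// A type that records which constructor produced it.
struct Probe {
    std::string how;
    Probe() : how("default") {}
    Probe(const Probe&) : how("copy") {}
    Probe(Probe&&) : how("move") {}
};

// src is an rvalue reference, but as a named variable it is an lvalue,
// so Probe p(src) selects the copy constructor.
std::string constructViaName(Probe&& src) {
    Probe p(src);
    return p.how;
}

// std::move(src) casts the name back to an rvalue, so the move
// constructor is selected instead.
std::string constructViaMove(Probe&& src) {
    Probe p(std::move(src));
    return p.how;
}
```

In the original question, writing Toy copy(std::move(src)); inside fun would make the "Move-constructed" line appear.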
As you know, Tcl has some mathematical functions, such as sin, cos, and hypot, that are called in the expr command with parentheses, as follows:
puts [expr sin(1.57)]
Now, how can I add a function using the Tcl library API so that it can be called in exactly the same way, and does whatever a certain proc defines?
To clarify my question: say there is a proc (as a string) as follows:
proc add { a b } { return [expr $a+$b] } ;# string of a proc
Also, I have a Tcl interpreter in my C++ code. Now I want to take the string of that proc and, at runtime, register a function called add in the tcl::mathfunc namespace (I guess I should use Tcl_CreateObjCommand) so that I can call the following:
puts [expr add(1.57, 1.43)]
How can this be done? Could you please show a simple example? I could not find any example in the Tcl documentation, or in books, describing the usage of this command.
Creating a function from C isn't too hard. To do it, you've got to write an implementation of a command that will perform the operation, and register that implementation as a command in the correct namespace. (In 8.4 and before, functions were done with a separate interface that was quite a bit nastier to use; the mechanism was wholly overhauled in 8.5.)
Command Implementation
Note that the signature is fixed, and the ignored parameter is not used here. (It's really a void *, which is great when you want to do things like binding a command to an object, but it simply isn't needed for doing an addition.)
static int AddCmd(ClientData ignored, Tcl_Interp *interp, int objc,
                  Tcl_Obj *const objv[]) {
    double x, y, sum;

    /* First, check the number of arguments: the command name is always objv[0] */
    if (objc != 3) {
        Tcl_WrongNumArgs(interp, 1, objv, "x y");
        return TCL_ERROR;
    }
    /* Get our arguments as doubles */
    if (Tcl_GetDoubleFromObj(interp, objv[1], &x) != TCL_OK ||
            Tcl_GetDoubleFromObj(interp, objv[2], &y) != TCL_OK) {
        return TCL_ERROR;
    }
    /* Do the real operation */
    sum = x + y;
    /* Pass the result out */
    Tcl_SetObjResult(interp, Tcl_NewDoubleObj(sum));
    return TCL_OK;
}
Don't worry about the fact that it's allocating a value here; Tcl's got a very high performance custom memory manager that makes that a cheap operation.
Command Registration
This is usually done inside an initialization function that is registered as part of a Tcl package definition, or that is called as part of the initialization of the overall application. You can also do it directly if you are calling Tcl_CreateInterp manually. Which you choose depends on exactly how you are integrating with Tcl, and that is quite a large topic of its own, so I'll just show how to create an initialization function; that's usually a good start in all scenarios.
int Add_Init(Tcl_Interp *interp) {
    /* Use the fully-qualified name */
    Tcl_CreateObjCommand(interp, "::tcl::mathfunc::add", AddCmd, NULL, NULL);
    return TCL_OK;
}
The first NULL is the value that gets passed through as the first (ClientData) parameter to the implementation. The second is a callback to dispose of the ClientData (or NULL if it needs no action, as here).
Doing all this from C++ is also quite practical, but remember that Tcl is a C library, so the callbacks have to be functions (not methods, at least not without an adapter) and they need C linkage.
To get the body of a procedure from C (or C++), by far the easiest mechanism is to use Tcl_Eval to run a simple script such as info body theCmdName. Procedure implementations are very complex indeed, so the interface to them is purely at the script level (unless you entangle yourself far more with Tcl than is really wise).