I just discovered a problem doing arithmetic with variables that have leading 0's. I found a solution for setting individual variables to decimal using:
N=016
N=$((10#$N)) # force decimal (base 10)
echo $((N + 2))
# result is 18, ok
But I have multiple vars in my script that may or may not take a leading zero when run. I wonder if there is a global option that can be set to specify that all numbers in the script are to be interpreted as decimal? Or would there be a potential problem with doing so that I perhaps did not take into account?
I thought the set command might have such an option, but after checking the man page I did not see anything that looked like it would do the job.
As far as I can tell, this is an (unfortunate) convention established by the B language that a leading 0 introduces an octal number.
By looking at the bash sources, it seems that this convention is hard-coded in several places (lib/sh/strtol.c, builtins/common.c and, for this specific case, the strlong function in expr.c). So to answer your question: no, there isn't a global option to make all numbers decimal.
If you have base-10 numbers, potentially prefixed by a 0, that you want to perform calculations on, you might use the ${N#0} notation to refer to them.
sh$ N=010
sh$ echo $((${N#0}+0))
10
I don't know if this is more readable, or even less error prone, than the solution you proposed in your question, though. Note also that ${N#0} strips only one leading zero, so a value like 0010 would still be read as octal.
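If many variables may carry a leading zero, another option is a small helper that applies the 10# forcing in one place (just a sketch; the name dec is my own):

dec() { printf '%s' "$((10#$1))"; }   # force base-10 interpretation

N=0016
M=008                                  # would be invalid as octal
echo $(( $(dec "$N") + $(dec "$M") )) # 24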
I recently used the $RANDOM variable and I was truly curious about the under-the-hood implementation of it: the syntax says it's a variable, but the behavior says it's like a function, since it returns a different value each time it's read.
This is not "in Unix shells"; this is a Bash-specific feature.
It's not hard to guess what's going on under the hood; the shell special-cases this variable so that each attempt to read it instead fetches two bytes from a (pseudo-) random number generator.
To see the definition, look at get_random in variables.c (currently around line 1363).
Regarding "the under-the-hood implementation of it":
There are some special "dynamic variables" with special semantics, such as $RANDOM, $SECONDS, and $LINENO. When bash gets the value of such a variable, it executes a special function.
RANDOM "variable" is setup here bash/variables.c and get_random() just sets the value of the variable, taking random from a simple generator implementation in bash/random.c.
My script currently accepts an ActiveSupport date string as a command-line argument:
my_script --mindate 1.day
Inside my script I am using eval to store it into my config:
MyScript.configuration.min_date = eval(min_date_string)
I understand that this is extremely dodgy and insecure as anything can be passed to eval, but what are my alternatives?
You want time durations? I suppose you could use chronic_duration.
my_script --mindate "1 day"
MyScript.configuration.min_date = ChronicDuration.parse(min_date_string)
Since it's based on natural-language heuristics, it's not entirely well defined exactly what sorts of strings it will recognize, but it will handle fancy things like "1 day and four hours".
Or you could write your own very simple parser/interpreter for the argument: split on a space (for "1 day" input) or a period (for "1.day" input), recognize a few words in the second position ("hour", "minute", "day", "month", "year"), translate them to seconds, and multiply the number by the translated word. A dozen or so lines of Ruby, probably; a sketch follows below.
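Here is a sketch of that hand-rolled approach (the helper name parse_duration and the 30-day month / 365-day year approximations are my own choices, not anything from your codebase):

UNIT_SECONDS = {
  "second" => 1, "minute" => 60, "hour" => 3600,
  "day" => 86_400, "month" => 30 * 86_400, "year" => 365 * 86_400
}.freeze

def parse_duration(str)
  number, unit = str.split(/[\s.]/, 2)   # handles "1 day" and "1.day"
  unit = unit.to_s.sub(/s\z/, "")        # tolerate plurals like "days"
  factor = UNIT_SECONDS.fetch(unit) { raise ArgumentError, "unknown unit: #{unit}" }
  Integer(number) * factor               # Integer() raises on a bad count
end

parse_duration("1 day")   # => 86400
parse_duration("1.day")   # => 86400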
Or you could even take advantage of the ActiveSupport feature that supports things like "1.day" to make it even easier.
str = "11 hours"
number, unit = str.split(' ')
number.to_i.send(unit)
That would let the command-line user send any method they want to a number. I'm not sure it matters. For that matter, I'm not sure if the original eval really matters or not -- but I agree with you it's bad practice. Probably so is calling send on user input, although it's not quite as bad.
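If you do go the send route, a hedged middle ground is to allow-list the unit first (ALLOWED_UNITS here is my own name; ActiveSupport must be loaded to supply methods like hours):

ALLOWED_UNITS = %w[second seconds minute minutes hour hours
                   day days week weeks month months year years].freeze

str = "11 hours"
number, unit = str.split(' ')
raise ArgumentError, "unknown unit: #{unit}" unless ALLOWED_UNITS.include?(unit)
number.to_i.send(unit)   # safe now: only whitelisted unit methods can run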
Or you could just make them send in the raw number of seconds and calculate it themselves.
my_script --mindate 86400
You realize 1.day just ends up being converted to the number of seconds in a standard day, right? I'm not sure why you're calling a number of seconds "mindate", but that's your business!
Edit: Or, yet another alternative, make them do:
my_script --mindays 2 --minhours 4 --minminutes 3
or something.
How is your script being called? Is this always going to be called by a user with an account on whatever machine is running it? Is this somehow going to get called by a web service, or in a way that someone without access to the machine would be able to call it remotely with their own arguments?
If it's only going to be called by users, and those users already have access to the ruby command or irb, then you're not enabling them to do anything that they can't already do by calling eval.
If it's called remotely, you should probably not be using eval. Or, a quick-and-dirty solution could be to match the string against a regex before you eval it. Something like /\A[0-9]+(\.[a-z_0-9]+){0,2}\z/ would ensure that the input is within two method calls of an integer literal before evaluating. (Use \A and \z rather than ^ and $: in Ruby, ^ and $ match at line boundaries, so a string with an embedded newline could smuggle extra code past the check.)
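Put together, a guarded eval might look like this sketch (the abort message is my own; the regex narrows the input before eval but is not a full security guarantee):

if min_date_string =~ /\A[0-9]+(\.[a-z_0-9]+){0,2}\z/
  MyScript.configuration.min_date = eval(min_date_string)
else
  abort "invalid --mindate: #{min_date_string}"
end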
I am sure that this has been asked, but I can't find it through my rudimentary searches.
Is it discouraged to perform operations within string initializations?
> increment = 4
=> 4
> "Incremented from #{increment} to #{increment += 1}"
=> "Incremented from 4 to 5"
I sure wouldn't, because that's not where you look for things-that-change-things when reading code.
It obfuscates intent, it obscures meaning.
Compare:
url = "#{BASE_URL}/#{++request_sequence}"
with:
request_sequence += 1
url = "#{BASE_URL}/#{request_sequence}"
If you're looking to see where the sequence number is coming from, which is more obvious?
I can almost live with the first version, but I'd be likely to opt for the latter. I might also do this instead:
url = build_url(request_sequence += 1)
In your particular case it might be okay, but the problem is that the operation on the variable must appear at the last occurrence of that variable in the string, and you cannot always be sure of that. For example, suppose that (for some stylistic reason) you want to write
"Incremented to #{...} from #{...}"
Then, all of a sudden, you cannot do what you did. Performing an operation during interpolation is thus highly dependent on the particular phrasing of the string, which decreases the maintainability of the code.
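To make the trap concrete: interpolations are evaluated left to right, so once the mutation moves earlier in the string, the later reference sees the already-changed value:

increment = 4
"Incremented to #{increment += 1} from #{increment}"
# => "Incremented to 5 from 5" -- not the intended "to 5 from 4"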
In bash I am trying to code a conditional with numbers that are decimals (with fractions). Then I found out that bash cannot do decimal arithmetic.
The script that I have is as follows:
a=$(awk '/average TM cross section = / {CCS=$6}; END {printf "%15.4E \n",CCS}' ${names}_$i.out)
a=$(printf '%.2f\n' $a)
echo $a
In the *.out file the numbers are in scientific notation. At the end, echo $a gives me a number like 245.35 (or other numbers in my other files). So I was wondering how to change the output number 245.35 into 24535 so I can do a conditional in bash.
I tried to multiply and that obviously did not work. Can anyone help with this conversion?
You might do best to use something other than bash for your arithmetic -- call out to something with a bit more power. You might find the following links either inspiring or horrifying: http://blog.plover.com/prog/bash-expr.html ("Arithmetic expressions in shell scripts") and http://blog.plover.com/prog/spark.html ("Insane calculations in bash"). I'm afraid this is the sort of thing you're liable to end up with if you seriously try to do bash-based arithmetic. In particular, the to_rational function in the second of those articles includes code for splitting up decimals using regular expressions, though it does something more complicated with them than it sounds like you need.
Per our extended conversation:
a=$(awk '/average TM cross section = / {CCS=$6}; END {printf "%.0f\n",CCS * 100}' ${names}_$i.out)
Now your output will be an integer. (Using %.0f rounds rather than truncates, which avoids floating-point artifacts such as 245.35 * 100 printing as 24534 under %d.)
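With the value scaled to an integer, an ordinary bash conditional works; the 20000 threshold below (i.e. 200.00 scaled by the same factor of 100) is just a made-up example:

if (( a > 20000 )); then
    echo "CCS is above 200.00"
fi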
Note that awk is well designed for processing large files and testing logic, and it is likely that all or most of your processing could be done in one awk process. If you're processing large amounts of data, the time savings can be significant.
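For example (the 200.0 threshold is hypothetical), the comparison itself can move into awk, using its exit status to drive the shell:

awk '/average TM cross section = / {CCS=$6}
     END {exit !(CCS > 200.0)}' ${names}_$i.out && echo "CCS above threshold"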
I hope this helps.
As per the info provided by you, this is not related to any arithmetic operation.
Treat it as a string: find the decimal point and remove it. That's what I understand.
http://www.cyberciti.biz/faq/unix-linux-replace-string-words-in-many-files/
http://www.thegeekstuff.com/2010/07/bash-string-manipulation/
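In bash itself that string edit is a single parameter expansion (this sketch assumes a plain 245.35-style value, not scientific notation):

a=245.35
a=${a/.}      # delete the (first) decimal point
echo "$a"     # 24535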
There is the following in some code I'm trying to figure out:
For I& = 1 To...
I'm not familiar with the & after a variable. What does that represent?
After some further research, it looks like the I& is being defined as type LONG. Now my question is: why would they be doing this? Is it overkill, or just legacy code?
The legacy BASIC language had several ways of declaring variables. You could append data type suffixes ($, %, &, !, or #) to the variable name:
x$ = "This is a string" ' $ defines a string
y% = 10 ' % defines an integer
y& = 150 ' & defines a long integer
y! = 3.14 ' ! defines a single
y# = 12.24 ' # defines a double
Legacy. Old-school (pre-.NET) Visual Basic used variable-name suffixes in lieu of (optional) explicit variable types.
You are right - putting an ampersand & after a number or a variable means that it is of the 32-bit Long type.
So the question is: how many iterations does the loop need? Is it possible that it would exceed a 16-bit integer?
With no data type identifier after the I, it would default to Integer, which makes it a 16-bit value.
So I'd say the original developer had the habit of explicitly stating the variable type with &, and whether it was really needed here depends on the number of iterations the For..Next loop has to support.
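A minimal VB6 sketch of the difference (the loop bounds are made up for illustration):

Dim n As Long
Dim i%                  ' % suffix: 16-bit Integer
For i% = 1 To 30000     ' fine: fits in an Integer
    n = n + 1
Next
' For i% = 1 To 40000 would raise run-time error 6 (Overflow), because
' Integer tops out at 32,767; For i& = 1 To 40000 would be fine, because
' Long ranges up to 2,147,483,647.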
Most likely it is either old code ported forward to VB6 from QBasic, etc., or else just a bad habit some individual programmer kept from that era. While kind of sloppy, its meaning should be obvious to a VB6 programmer, since the suffix can be used with numeric literals in many cases too:
MsgBox &HFFFF    ' Integer literal: displays -1
MsgBox &HFFFF&   ' Long literal: displays 65535
These display different values because they are different values.
Yes, it means Long, but it often reflects somebody who failed to set the IDE option that auto-includes Option Explicit in new modules.
Using symbolic notation (Integer - %, Long - &, Single - !, Double - #, String - $) is an excellent method for variable declaration and usage. Its usage is consistent with "structured programming" and it's a good alternative to Hungarian notation.
With Hungarian notation, one might define a string filename as "strFileName", where the variable name is preceded by a lower-case abbreviation of the variable type. This is contrary to another good programming practice of making all global variables begin with an upper-case first letter and all local variables begin with a lower-case one, which helps the reader of your code instantly know the scope of a variable. I.e., firstName$ is a local string variable; LastName$ is a global string variable.
As with all programming, it's good to follow conventions, whether you define your own, adopt somebody else's, or follow industry conventions. Following no conventions is a very poor programming practice. Using symbolic notation is one type of naming convention.