I am writing a bash script that takes in some numbers output by a tool. They are printed in "e" notation (scientific notation, I believe it's called).
So, it spits out numbers like 1.3684528004e+05 and 1.2815670938e+04.
How can I convert these into their full original numbers in my bash script? I have the usual binaries at my disposal, such as bc and dc; the box also has php-cli and perl installed (Ubuntu 10.x).
Many thanks for reading.
You can use the printf built-in:
$ x=1.3684528004e+05
$ printf "%f\n" $x
136845.280040
$ y=1.2815670938e+04
$ printf "%f\n" $y
12815.670938
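If the values arrive as a stream from the tool, the same conversion works in a loop. A minimal sketch (the precision and the sample input are mine, not from the question):

```shell
# Convert each scientific-notation value read on stdin, one per line.
# %.10f keeps more fractional digits than printf's default of 6.
while read -r n; do
  printf '%.10f\n' "$n"
done <<'EOF'
1.3684528004e+05
1.2815670938e+04
EOF
# prints 136845.2800400000 and 12815.6709380000
```

Note that bash's printf uses the locale's decimal separator, so in some locales you may see a comma instead of a dot.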
I am trying to add two 32-bit binary numbers. One of them is a constant (address_range_in_binary), and the other is an element of an array (IPinEachSubnet[$val]).
I am trying to follow the instructions here, but I could not figure out how to get it done using variables. I have been trying different combinations of the below, but none of them seems to work. It is probably a simple syntax issue; any help would be appreciated. The following prints some random-looking negative values.
For example, if the values are as follows:
address_range_in_binary=00001010001101110000101001000000
IPinEachSubnet[$val]=00000000000000000000000000010000
echo "ibase=2;obase=2;$((address_range_in_binary+IPinEachSubnet[$val]))" | bc -l
The output of this is -1011101110111111110
Bash-only solution
y=2#00001010001101110000101001000000
t=2#00000000000000000000000000010000
oct=$(printf '%o' $(( y + t )))   # no binary format in printf, so go via octal
o2b=({0..1}{0..1}{0..1})          # maps each octal digit 0-7 to its three bits
r=''
for (( i=0; i<${#oct}; i++ ))
do
    r+=${o2b[${oct:$i:1}]}
done
echo "$r"
The conversion from octal to binary is inspired by Bash shell Decimal to Binary conversion.
Let's define your variables (I will use shorter names):
$ y=00001010001101110000101001000000
$ t=00000000000000000000000000010000
Now, let's run the command in question:
$ echo "ibase=2;obase=2;$((y+t))" | bc -l
-1011101110111111111
This reproduces the incorrect result that you observed.
To get the correct result:
$ echo "ibase=2;obase=2; $y+$t" | bc -l
1010001101110000101001010000
Discussion
The command $((y+t)) tells bash to do the addition itself before bc ever sees the numbers. Worse, because of the leading zeros, bash parses the values as octal constants, and numbers this large overflow a 64-bit integer, which is why the result comes out negative. Whatever decimal bash produces is then passed to bc. This is not what you want: you want bc to do the addition.
Using an array
$ y=00001010001101110000101001000000
$ arr=(00000000000000000000000000010000)
$ echo "ibase=2;obase=2; $y+${arr[0]}" | bc -l
1010001101110000101001010000
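If you only need the sum, and not its binary representation, bash alone can do the addition: the 2# prefix tells bash the string that follows is base-2, which also makes the leading zeros harmless. A sketch using the question's values:

```shell
y=00001010001101110000101001000000
t=00000000000000000000000000010000
# 2# forces base-2 interpretation of the variable's contents.
sum=$(( 2#$y + 2#$t ))
echo "$sum"
```

printf has no binary output format, so to get the binary string back you would still run the decimal sum through bc with obase=2.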
I have a collection of stored procedures (SPs) being called in some C# code. I simply want to find which lines in which C# files are using these SPs.
I have installed git-bash, and am working in a Win10 environment.
No matter what I try, grep either spits out nothing or spits out the entire contents of every file that has a match. I simply want the filename and the line number where the SP regex matches.
In a terminal, here is what I have done:
procs=( $(cat procs.txt) )   # load the procs into an array
echo ${#procs[@]}            # echo the size to make sure each proc got read in separately
output: 235
files=( $(find . -type f -iregex '.*\.cs') )   # load the file paths into an array;
                                               # this similarly returns a filled-out array
output: over 1000
I have also tried this variant, which removes the initial './' from each path, thinking the relative pathing was causing an issue:
files=( $(find . -type f -iregex '.*\.cs' | sed 's/..//') )
The rest is a simple nested for loop:
for i in ${procs[@]}
do
    for j in ${files[@]}
    do
        grep -nie "$i" "$j"
    done
done
I have tried many other variants of this basic idea, like redirecting the grep output to a text file, adding and subtracting flags, quoting and unquoting the variables, and the like.
I also tried this approach, but was similarly unsuccessful:
for i in ${procs[@]}
do
    grep -r --include='*.cs' -F $i
    # and I also tried
    grep -F $i *
done
At this point I am thinking there is something I don't understand about how git-bash works in a Windows environment, because it seems like it should have worked by now.
Thanks for your help.
EDIT:
So after hours of heartache I finally got it to work with this:
for i in "${!procs[@]}"
do
    for j in "${!files[@]}"
    do
        egrep -nH $(echo "${procs[$i]}") $(echo "${files[$j]}")
    done
done
I looked it up, and my git-bash version is gnu-bash 4.4.12(1) x86_64-pc-msys
I'm still not sure why git-bash needs such weird quoting and echoing just to get everything to run properly. On Debian Linux it worked with just a simple
for i in ${procs[@]}
do
    for j in ${files[@]}
    do
        grep $i $j
    done
done
Running this version of bash: GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu)
If anyone can tell me why git-bash behaves so oddly, I would still love to know the answer.
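For what it's worth, the nested loops can be replaced by a single grep call, and CR/LF line endings in procs.txt are a frequent cause of exactly this symptom under git-bash, since each pattern then ends in an invisible carriage return. A sketch with made-up file contents (GetUserById, Sample.cs, and friends are hypothetical names, not from the question):

```shell
# Work in a scratch directory with sample data.
cd "$(mktemp -d)"
printf 'GetUserById\r\nUpdateOrder\r\n' > procs.txt   # note the DOS-style CRs
printf 'var cmd = "GetUserById";\n' > Sample.cs

tr -d '\r' < procs.txt > procs.clean   # CRLF -> LF
# -F: fixed strings, -f: read patterns from a file, -n: line numbers,
# -H: always print the file name, --include: search only .cs files.
grep -rnH --include='*.cs' -Ff procs.clean .
```

This prints filename:line:match for every stored-procedure name found, with no shell loops and one grep invocation instead of 235,000.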
What might be the most concise way in bash to convert a number into a bitfield character string like 1101?
In effect I am trying to do the opposite of
echo $[2#1101]
Why: I need to send a parameter to a program that takes bitfields in the form of a full string like "0011010110" but often only need to enable one or few bits as in:
SUPPRESSbits=$[1<<16] runscript.sh            # OR
SUPPRESSbits=$[(1<<3) + (1<<9)] runscript.sh  # much more readable when I know what bits 3 and 9 toggle in the program
runscript.sh then sees in its env SUPPRESSbits=65536 rather than SUPPRESSbits="10000000000000000" and ends in a parse error.
The easy way:
$ dc <<<2o123p
1111011
$ bc <<<'obase=2; 123'
1111011
I doubt bash can do it by itself, but you can always use perl:
a=123; b=$(perl -e 'printf "%b", "'$a'"'); echo $b
1111011
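For a pure-bash answer to the question (no dc, bc, or perl), a sketch that peels off the low bit in an arithmetic loop; the function name is mine:

```shell
# Convert a decimal integer to its binary string representation.
dec2bin() {
  local n=$1 out=''
  (( n == 0 )) && { echo 0; return; }
  while (( n > 0 )); do
    out=$(( n & 1 ))$out   # prepend the lowest bit
    n=$(( n >> 1 ))        # shift it out
  done
  printf '%s\n' "$out"
}
dec2bin 123   # → 1111011
```

Zero-padding to a fixed field width, as the bitfield string needs, can then be done with printf '%016d' "$(dec2bin ...)" or a small loop.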
The following ksh script gives me a "No such file or directory" error message on a Red Hat Linux system. Does anyone have a solution?
#!/usr/bin/ksh
for f in `cat files.dat`
do
wc $f
done
For example, files.dat has 3 lines of data, and each line names a file in the current directory the script is run from:
a.c
a.h
b.c
Note: the same for loop generates the same error message when run from the command line too.
It works on Solaris/Mac box but not on Red Hat system.
Thanks.
Instead of for ... cat, you should use
while read -r f
do
wc "$f"
done < files.dat
And you should use $() instead of backticks when you do need to do command substitution.
But your problem is probably that the files a.c, etc., are not there, that they have different names or invisible characters in their names, or that the line endings in files.dat are CR/LF (DOS/Windows-style) instead of LF only (Unix-style).
You should properly quote your arguments, in other words, use "$f", not $f.
As for cat: the "Useless Use of Cat" is documented here: http://porkmail.org/era/unix/award.html
Probably better suited here is xargs -a files.dat wc.
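Since CR/LF endings are the most common cause of this exact error for files copied from Windows, here is a quick check-and-fix sketch (the sample data is mine):

```shell
# Simulate a files.dat with DOS line endings in a scratch directory.
cd "$(mktemp -d)"
printf 'a.c\r\na.h\r\nb.c\r\n' > files.dat

# With CRLF endings the shell looks for a file literally named "a.c\r",
# which produces "No such file or directory". Strip the CRs if present:
if grep -q "$(printf '\r')" files.dat; then
    tr -d '\r' < files.dat > files.tmp && mv files.tmp files.dat
fi
```

dos2unix files.dat does the same job where it is installed.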
I need a way to replace HTML ASCII codes like &#33; with their correct characters in bash.
Is there a utility I could run my output through to do this, or something along those lines?
$ echo '&#33;' | recode html/..
!
$ echo '&lt;&#8734;&gt;' | recode html/..
<∞>
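If recode isn't installed, python3's standard html module handles both named and numeric entities; this is an alternative of mine, not part of the answer above:

```shell
# Decode HTML entities on stdin using python3's html.unescape.
echo 'x &lt; y &amp;&amp; z &#33;= w' \
  | python3 -c 'import sys, html; sys.stdout.write(html.unescape(sys.stdin.read()))'
# → x < y && z != w
```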
I don't know of an easy way; here is what I suppose I would do...
You might be able to script a browser into reading the file in and then saving it as text. If lynx supports HTML character entities, it might be worth looking into. If that doesn't work out...
The general solution to something like this is done with sed. You need a "higher-order" edit for this: you first start with an entity table, then edit that table into an edit script itself via a multiple-step procedure. Something like:
. . .
s/&Dagger;/‡/g<br />
s/&rdquo;/”/g<br />
. . .
Then, encapsulate this as HTML, read it into a browser, and save it as text in the character set you are targeting. If you get it to produce lines like:
s/&lt;/</g
then you win. A bash script that calls sed or ex can then be driven by the substitute commands in the file.
Here is my solution with the standard Linux toolbox.
$ foo='This is a line feed&#10;And e acute:&#233; with a grinning face &#128512;.'
$ echo "$foo"
This is a line feed&#10;And e acute:&#233; with a grinning face &#128512;.
$ eval "$(printf '%s' "$foo" | sed 's/^/printf "/;s/&#\([0-9]*\);/\$( [ \1 -lt 128 ] \&\& printf "\\\\$( printf \"%.3o\\201\" \1)" || \$(which printf) \\\\U\$( printf \"%.8x\" \1) )/g;s/$/\\n"/')" | sed "s/$(printf '\201')//g"
This is a line feed
And e acute:é with a grinning face 😀.
You see that it works for all kinds of escapes: even a line feed, e acute (é), which is a two-byte UTF-8 character, and even the new emoticons, which live in the supplementary planes (four bytes in UTF-8).
This command ALSO works with dash, a trimmed-down shell (the default /bin/sh on Ubuntu), and is likewise compatible with bash and with shells like the ash used by Synology.
If you don't mind sticking with bash and dropping that compatibility, you can make it much simpler.
The bits used should be on any decent Linux box (or OS X?):
- which
- printf (GNU and builtin)
- GNU sed
- eval (shell builtin)
The bash-only version needs neither which nor the GNU printf.
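As a sketch of that simpler bash-only route (requires bash 4.2+ for the \U escape in printf; the function name and sample input are mine):

```shell
# Replace each &#NNN; in the argument with the character printf
# produces for that code point.
decode_entities() {
  local s=$1 ch
  while [[ $s =~ '&#'([0-9]+)';' ]]; do
    # %08x pads the code point to the 8 hex digits \U expects.
    printf -v ch "\\U$(printf '%08x' "${BASH_REMATCH[1]}")"
    s=${s//"${BASH_REMATCH[0]}"/$ch}   # replace every occurrence
  done
  printf '%s\n' "$s"
}
decode_entities 'e acute: &#233;'
```

Unlike the eval pipeline above, this handles only decimal numeric entities, not named ones like &amp;.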