How do I read a line and set it as an integer without its brackets - bash

I have written a script which reads a line from a text file and prints it. The text is only a number, but it is in brackets, and because of that I can't use it as an integer. How do I make the script read the file without the brackets, or make it delete the brackets afterwards?
My script so far:
#!/bin/bash
P=`cat /sys/devices/platform/applesmc.768/light`
echo "$P"

Replace
P=`cat /sys/devices/platform/applesmc.768/light`
by
P=`cat /sys/devices/platform/applesmc.768/light | tr -d '()'`
to delete all ( and ) characters, or, shorter:
P=`tr -d '()' < /sys/devices/platform/applesmc.768/light`
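With the parentheses gone, the value is plain digits and bash arithmetic accepts it directly. A minimal sketch, assuming the file really contains a single number in brackets as described (the threshold 10 is only an example):
P=$(tr -d '()' < /sys/devices/platform/applesmc.768/light)
# P now holds only the number, so it can be used in (( ... ))
if (( P > 10 )); then
    echo "light sensor reads $P"
fi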

Related

convert a file's content using a shell script

Hello everyone, I'm a beginner in shell scripting. On a daily basis I need to convert a file's data to another format; I usually do it manually with a text editor, but I often make mistakes, so I decided to write a simple script that can do the work for me.
The file's content looks like this:
/release201209
a1,a2,"a3",a4,a5
b1,b2,"b3",b4,b5
c1,c2,"c3",c4,c5
to this:
a2>a3
b2>b3
c2>c3
The script should ignore the first line and print the second and third values separated by '>'
I'm halfway there, and here is my code:
#!/bin/bash
#while Loops
i=1
while IFS=\" read t1 t2 t3
do
test $i -eq 1 && ((i=i+1)) && continue
echo $t1|cut -d\, -f2 | { tr -d '\n'; echo \>$t2; }
done < $1
The problem with my code is that the last line isn't printed unless the file ends with a newline (\n).
I also want the echo output to go into a new CSV file (I tried redirecting standard output to my new file, but only the last echo ends up there).
Can someone please help me out? Thanks in advance.
Rather than treating the double quotes as a field separator, it seems cleaner to just delete them (assuming that is valid). Eg:
$ < input tr -d '"' | awk 'NR>1{print $2,$3}' FS=, OFS=\>
a2>a3
b2>b3
c2>c3
If you cannot just strip the quotes as in your sample input but those quotes are escaping commas, you could hack together a solution but you would be better off using a proper CSV parsing tool. (eg perl's Text::CSV)
Here's a simple pipeline that will do the trick:
sed '1d' data.txt | cut -d, -f2-3 | tr -d '"' | tr ',' '>'
Here, we're just removing the first line (as desired), selecting fields 2 & 3 (based on a comma field separator), removing the double quotes and mapping the remaining , to >.
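To also get the result into a new CSV file, as the question asks, the whole pipeline can simply be redirected; a sketch (output.csv is just a placeholder name):
sed '1d' data.txt | cut -d, -f2-3 | tr -d '"' | tr ',' '>' > output.csv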
Use this Perl one-liner:
perl -F',' -lane 'next if $. == 1; print join ">", map { tr/"//d; $_ } @F[1,2]' in_file
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-n : Loop over the input one line at a time, assigning it to $_ by default.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-a : Split $_ into array @F on whitespace or on the regex specified in the -F option.
-F',' : Split into @F on a comma, rather than on whitespace.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
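If you would rather keep your original read loop, a small change handles both issues: the || [ -n "$t1" ] test keeps the final line even when the file has no trailing newline, and redirecting the whole loop (instead of redirecting inside it) collects every echo in the output file. A sketch, with output.csv as a placeholder name:
i=1
while IFS=\" read -r t1 t2 t3 || [ -n "$t1" ]
do
    test $i -eq 1 && ((i=i+1)) && continue
    # t1 is "a1,a2," so field 2 is the second value; t2 is the quoted third value
    echo "$t1" | cut -d, -f2 | { tr -d '\n'; echo ">$t2"; }
done < "$1" > output.csv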

\t Tab is lost in bash script

I'm reading a text file line by line in a bash script. The file is a tab-separated CSV; however, when I try to cut the line I read, it does not work. It seems like the \t is converted to a blank space somewhere.
The code below is not my final version; I have not added the actual workload yet, since I first need the data to be read reliably.
for (( currlineno=2 ; $currlineno <= $maxlines ; currlineno++ )); do
currline=$(sed -n "$currlineno"p "$IMPORT_TABLE".csv )
echo $currline |cut -f2
done
Now when I change the two lines as below, it works:
for (( currlineno=2 ; $currlineno <= $maxlines ; currlineno++ )); do
currline=$(sed -n "$currlineno"p "$IMPORT_TABLE".csv |tr '\t' ';')
echo $currline |cut -f2 -d ';'
done
But I cannot do it like that, as my text file also contains ';', ',' and '.' in the fields. Tab is the only acceptable delimiter for me, as my fields will never contain it.
That's because you don't double-quote your variable.
tabbed=$'a\tb'
echo $tabbed : "$tabbed"
When bash sees the variable outside of quotes, it applies word splitting on its contents, and echo just outputs its parameters separated by spaces. Double quotes make the value one parameter, even if it contains whitespace, newlines, etc.
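Applied to the loop in the question, the only change that matters is quoting the variable when it is echoed; a sketch:
for (( currlineno=2 ; currlineno <= maxlines ; currlineno++ )); do
    currline=$(sed -n "${currlineno}p" "$IMPORT_TABLE".csv)
    echo "$currline" | cut -f2    # quoting preserves the tabs, so cut sees them
done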

Creating a comma separated array in bash

I have a file with the following contents:
line1
line2
line3
I need to create an array like this:
('line1','line2','line3')
How can I do that?
This should help you:
sed -r "s/^|$/'/g" file | echo "(`paste -d, -s`)"
I'm using sed to add ' at the start and end of each line, then concatenating the content using paste and enclosing it in parentheses using echo.
You can use the following:
while read line; do printf "'$line',"; done < file | sed 's/^/(/;s/,$/)\n/'
The while loop quotes each line and joins them with commas, and sed wraps the result in parentheses:
s/^/(/ is adding a ( at the beginning of the string.
s/,$/)\n/ replaces the trailing , with a ) and a newline.
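A bash-only alternative is also possible; a sketch (mapfile requires bash 4 or later):
mapfile -t lines < file                  # read every line into an array
printf -v joined "'%s'," "${lines[@]}"   # quote each element and append a comma
echo "(${joined%,})"                     # drop the trailing comma, add parentheses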

Extracting value from a flat file using shell script

I'm trying to extract the value between brackets in the last row of a flat file, e.g. "last_line (4)". This is the last line, and I want to extract 4 and store it in a variable. I have extracted the last row using the tail command, but now I am unable to extract the value between the brackets.
Kindly help.
Using awk:
$ cat input
first line
2nd line
last line (4) with some data
$ awk -F'[()]' 'END{print $2}' input
4
l=$(tail -n1 filename); tmp=${l##*(}; tmp=${tmp%)*}; printf "tmp: %s\n" $tmp
Output
tmp: 4
Written in script format, you are using substring removal to trim everything through the last ( and everything from the last ) onward from the last line, leaving only 4:
l=$(tail -n1 filename) ## get the last line
tmp=${l##*(} ## trim to ( from left
tmp=${tmp%)*} ## trim to ) from right
printf "tmp: %s\n" $tmp
sed:
sed -n '${s/.*(//;s/).*//;p}' file
You can use this script.
In it, I save the last line to a temporary file and remove the file at the end.
The number between the brackets () ends up in the variable WORD:
#!/bin/ksh
if test "${DEBUG}" = "Y"
then
set -vx
fi
tail -1 input>>tmp
WORD=`sed -n 's/.*(//;s/).*//;p' tmp`
echo $WORD
rm tmp
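The temporary file is not strictly needed; the same result can be captured directly with command substitution, e.g. this sketch based on the tail and sed calls above:
WORD=$(tail -n1 input | sed 's/.*(//;s/).*//')
echo "$WORD"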

gnuplot for cycle and spaces in filename

I have a small bash script which generates graphs via gnuplot.
Everything works fine until the names of the input files contain spaces.
Here's what I've got:
INPUTFILES=("data1.txt" "data2 with spaces.txt" "data3.txt")
...
#MAXROWS is set earlier, not relevant.
for LINE in $( seq 0 $(( MAXROWS - 1 )) );do
gnuplot << EOF
reset
set terminal png
set output "out/graf_${LINE}.png"
filenames="${INPUTFILES[@]}"
set multiplot
plot for [file in filenames] file every ::0::${LINE} using 1:2 with line title "graf_${LINE}"
unset multiplot
EOF
done
This code works, but only without spaces in names of input files.
In the example, gnuplot evaluates this:
1 iteration: file=data1.txt - CORRECT
2 iteration: file=data2 - INCORRECT
3 iteration: file=with - INCORRECT
4 iteration: file=spaces.txt - INCORRECT
The quick answer is that you can't do exactly what you want to do. Gnuplot splits the string in an iteration on spaces, and there's no way around that (AFAIK). Depending on what you want, there may be a workaround. You can write a (recursive) function in gnuplot to replace one character string with another --
#S,C & R stand for STRING, CHARS and REPLACEMENT to help this be a little more legible.
replace(S,C,R)=(strstrt(S,C)) ? \
replace( S[:strstrt(S,C)-1].R.S[strstrt(S,C)+strlen(C):] ,C,R) : S
Bonus points to anyone who can figure out how to do this without recursion...
Then your (bash) loop looks something like:
INPUTFILES_BEFORE=("data1.txt" "data2 with spaces.txt" "data3.txt")
INPUTFILES=()
#C-style loop to avoid changing IFS
#This loop pre-processes files and changes spaces to '#_#'
for (( i=0; i < ${#INPUTFILES_BEFORE[@]}; i++ )); do
FILE=${INPUTFILES_BEFORE[${i}]}
INPUTFILES+=( "`echo ${FILE} | sed -e 's/ /#_#/g'`" ) #replace ' ' with '#_#'
done
which preprocesses your input files to add '#_#' to the filenames which have spaces in them... Finally, the "complete" script:
...
INPUTFILES_BEFORE=("data1.txt" "data2 with spaces.txt" "data3.txt")
INPUTFILES=()
for (( i=0; i < ${#INPUTFILES_BEFORE[@]}; i++ )); do
FILE=${INPUTFILES_BEFORE[${i}]}
INPUTFILES+=( "`echo ${FILE} | sed -e 's/ /#_#/g'`" ) #replace ' ' with '#_#'
done
for LINE in $( seq 0 $(( MAXROWS - 1 )) );do
gnuplot <<EOF
filenames="${INPUTFILES[@]}"
replace(S,C,R)=(strstrt(S,C)) ? \
replace( S[:strstrt(S,C)-1].R.S[strstrt(S,C)+strlen(C):] , C ,R) : S
#replace '#_#' with ' ' in filenames.
plot for [file in filenames] replace(file,'#_#',' ') every ::0::${LINE} using 1:2 with line title "graf_${LINE}"
EOF
done
However, I think the take-away here is that you shouldn't use spaces in filenames ;)
Escape the spaces:
"data2\ with\ spaces.txt"
EDIT
It seems that even with escape sequences, as you have mentioned, the bash for will always parse the input on the spaces.
Can you convert your script to work in a while-loop fashion?
http://ubuntuforums.org/showthread.php?t=83424
This also may be a solution, but it's new to me and I'm still playing with it to understand exactly what it's doing:
http://www.cyberciti.biz/tips/handling-filenames-with-spaces-in-bash.html
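For reference, a while read loop on the bash side keeps filenames with spaces intact as single values; a minimal sketch (it only shows the bash iteration, not the gnuplot call):
printf '%s\n' "${INPUTFILES[@]}" | while IFS= read -r FILE; do
    echo "plotting \"$FILE\""   # each filename arrives as one whole string, spaces included
done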
