Read a file into variables in a shell script - shell
I have a text file that has coordinates in it. The text file looks like this:
52.56747345
-1.30973574
What I would like to do within a Raspberry Pi shell script is read the file and then create two variables: latitude, which is the first value in the text file, and longitude, which is the second value. I'm not sure how to do this, so could I please get some help?
This works ok:
$ { read -r lat; read -r lon; } < file
The first line is stored in the variable $lat, the second line in $lon.
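A minimal, self-contained sketch of this approach (the file name coords.txt is just an example, not from the question):

```shell
# Create a sample two-line coordinates file (hypothetical name: coords.txt)
printf '52.56747345\n-1.30973574\n' > coords.txt

# Each read consumes one line; -r prevents backslash mangling
{ read -r lat; read -r lon; } < coords.txt

echo "latitude=$lat longitude=$lon"
```

The brace group matters: both reads share the same redirection, so the second read picks up where the first left off.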
$ lat=$(head -1 file.txt)
$ echo "$lat"
52.56747345
$ lon=$(tail -1 file.txt)
$ echo "$lon"
-1.30973574
1. You have a data file:
cat data.txt
result:
52.56747345
-1.30973574
42.56747345
-2.30973574
32.56747345
-3.30973574
2. Write a shell script:
cat tool.sh
result:
#!/bin/bash
awk '{if (NR % 2 == 0) print $0; else printf "%s ", $0}' data.txt | while read -r latitude longitude
do
echo "latitude:${latitude} longitude:${longitude}"
done
3. Execute the shell script. The output looks like this:
sh tool.sh
result:
latitude:52.56747345 longitude:-1.30973574
latitude:42.56747345 longitude:-2.30973574
latitude:32.56747345 longitude:-3.30973574
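An alternative sketch that avoids the awk reshaping step by calling read twice per loop iteration (the file name data.txt follows the example above):

```shell
# Sample data file: latitude and longitude on alternating lines
printf '%s\n' 52.56747345 -1.30973574 42.56747345 -2.30973574 > data.txt

# Each iteration consumes two lines: one latitude, then one longitude
while read -r latitude && read -r longitude
do
    echo "latitude:${latitude} longitude:${longitude}"
done < data.txt
```

Because both reads draw from the same redirected input, the loop naturally pairs up consecutive lines.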
Related
How to print an array on the same line
I am reading a file which contains the following config as an array:

$ cat ./FILENAME
*.one.dev.arr.name.com
*.one.dev.brr.name.com
*.one.dev.sic.name.com
*.one.dev.sid.name.com
*.one.dev.xyz.name.com
*.one.dev.yza.name.com

The array is read with:

IFS='$\n' read -d '' -r -a FILENAME < ./FILENAME

I need the output to be of the following format:

'{*.one.dev.arr.name.com,*.one.dev.brr.name.com,*.one.dev.sic.name.com,*.one.dev.sid.name.com,*.one.dev.xyz.name.com,*.one.dev.yza.name.com}'

I've tried using printf; the tricky part is the wildcard (*) at the start of each name.
Just output what you want to output ({ and }) and join the lines with ,:

echo "{$(paste -sd, FILENAME)}"

If you want to do it with an array, you can just:

echo "{$(IFS=,; echo "${array[*]}")}"
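A runnable sketch of the paste approach (the input file name FILENAME follows the question; the contents here are shortened to two entries):

```shell
# Two sample patterns; the single quotes keep the leading * literal
printf '%s\n' '*.one.dev.arr.name.com' '*.one.dev.brr.name.com' > FILENAME

# paste -s joins all lines into one; -d, uses a comma as the delimiter
joined="{$(paste -sd, FILENAME)}"
echo "$joined"
```

The wildcard needs no special handling here because paste treats the lines as plain text; only unquoted expansion in the shell would glob them.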
Trying to create a script that counts the length of all the reads in a FASTQ file but getting no output
I am trying to count the length of each read in a FASTQ file from Illumina sequencing and output this to a TSV (or any sort of file) so I can then later also look at this and count the number of reads per file. So I need to cycle down the file, extract each line that has a read on it (every 4th line), then get its length and store this as an output.

num=2
for file in *.fastq
do
  echo "counting $file"
  function file_length(){
    wc -l $file | awk '{print$FNR}'
  }
  for line in $file_length
  do
    awk 'NR==$num' $file | chrlen > ${file}read_length.tsv
    num=$((num + 4))
  done
done

Currently all I get is "counting $file" and no other output, but also no errors.
Your script contains a lot of errors in both syntax and algorithm. Please try shellcheck to see what the problems are. The biggest issue is the $file_length part. You may have wanted to call a function file_length() here, but it is just an undefined variable, which is evaluated as null in the for loop.

If you just want to count the length of the 4th line of *.fastq files, please try something like:

for file in *.fastq; do
    awk 'NR==4 {print length}' "$file" > "${file}_length.tsv"
done

Or if you want to put the results together in a single TSV file, try:

tsvfile="read_length.tsv"
for file in *.fastq; do
    echo -n -e "$file\t" >> "$tsvfile"
    awk 'NR==4 {print length}' "$file" >> "$tsvfile"
done

Hope this helps.
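For FASTQ specifically, sequence lines recur every 4 lines (lines 2, 6, 10, ...), so NR % 4 == 2 selects every read rather than only the first. A small sketch under that assumption (sample.fastq is a made-up file name):

```shell
# Build a tiny two-read FASTQ file (hypothetical name: sample.fastq)
cat > sample.fastq <<'EOF'
@read1
ACGTACGT
+
IIIIIIII
@read2
ACGT
+
IIII
EOF

# Sequence lines are lines 2, 6, 10, ... i.e. NR % 4 == 2
awk 'NR % 4 == 2 {print length}' sample.fastq > sample.fastq_length.tsv
```

Here the output file contains one length per read (8 and 4 for this sample).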
How to process text in a shell script and assign it to variables?
sample.text file:

var1=https://www.process.com
var2=https://www.hp.com
var3=http://www.google.com
:
:
varz=https://www.sample.com

I am sending this sample text as input to a script. That script should split the lines and assign the variables to different parameters, like:

$varn = $var1, ..., $varn
$value = https://www.sample.com (all the variables' values)

I am trying the script below, but it is not working:

#!/bin/bash
for $1 in ( cat sample.txt ); do
  echo $1
  #var1=https://www.process.com
  sed 's/=/\n/g' $1 | awk 'NR%2==0'
done

The main aim is to assign all URLs to one variable and the var names to another and process the file.
If sample.text already contains your variable assignments for you, e.g.

var1=https://www.process.com
var2=https://www.hp.com
var3=http://www.google.com

and you want access to var1, var2, ... varn, then you are making things difficult on yourself by trying to read and parse sample.text instead of simply sourcing it with '.' or source. For example, given sample.text containing:

$ cat sample.text
var1=https://www.process.com
var2=https://www.hp.com
var3=http://www.google.com
varz=https://www.sample.com

You need only source the file to access the variables, e.g.

#!/bin/bash

. sample.text || {
    printf "error sourcing sample.text\n"
    exit 1
}

printf "%s\n" $var{1..3} $varz

Example Use/Output

$ bash source_sample.sh
https://www.process.com
https://www.hp.com
http://www.google.com
https://www.sample.com

Look things over and let me know if you have further questions.
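A compact, self-contained sketch of the sourcing idea (the file name sample.text follows the question; the contents are shortened to two assignments):

```shell
# Write a small assignments file; each line is valid shell syntax
cat > sample.text <<'EOF'
var1=https://www.process.com
var2=https://www.hp.com
EOF

# Sourcing runs the assignments in the current shell,
# so the variables become directly available afterwards
. ./sample.text || { printf "error sourcing sample.text\n"; exit 1; }

echo "$var1"
echo "$var2"
```

Note that sourcing executes the file's contents, so it should only be used on files whose contents you trust.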
Appending text to a specific line in a file in bash
So I have a file that contains some lines of text separated by ','. I want to create a script that counts how many parts a line has and, if the line contains 16 parts, adds a new one. So far it's working great. The only thing that is not working is appending the ',xx' at the end. See my example below.

Original file:

a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a
b,b,b,b,b,b
a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a
b,b,b,b,b,b
a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a

Expected result:

a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,xx
b,b,b,b,b,b
a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a
b,b,b,b,b,b
a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,xx

This is my code:

while read p; do
  if [[ $p == "HEA"* ]]
  then
    IFS=',' read -ra ADDR <<< "$p"
    echo ${#ADDR[@]}
    arrayCount=${#ADDR[@]}
    if [ "${arrayCount}" -eq 16 ]; then
      sed -i "/$p/ s/\$/,xx/g" $f
    fi
  fi
done <$f

Result:

a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a ,xx
b,b,b,b,b,b
a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a
b,b,b,b,b,b
a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a ,xx

What am I doing wrong? I'm sure it's something small but I can't find it.
It can be done using awk:

awk -F, 'NF==16{$0 = $0 FS "xx"} 1' file
a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,xx
b,b,b,b,b,b
a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a
b,b,b,b,b,b
a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,xx

-F, sets the input field separator to a comma.
NF==16 is the condition that says: execute the block inside { and } if the number of fields is 16.
$0 = $0 FS "xx" appends xx at the end of the line.
1 is the default awk action, which means print the line.
For using sed, the answer should be along the following lines:

Use the ${line_number}s/..../..../ format; to target a specific line, you need to find out the line number first.
Use the special character & to denote the matched string.

The sed statement should look like the following:

sed -i "${line_number}s/.*/&xx/"

I would prefer to leave it to you to play around with it, but if you prefer I can give you a full working sample.
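A runnable sketch of that sed form (GNU sed is assumed for -i without a backup suffix; demo.txt is a made-up file name):

```shell
# Three-line demo file
printf 'one\ntwo\nthree\n' > demo.txt

line_number=2
# & in the replacement stands for whatever .* matched, i.e. the whole line,
# so this appends xx to line 2 only and leaves the other lines untouched
sed -i "${line_number}s/.*/&xx/" demo.txt
```

On BSD/macOS sed the equivalent would be sed -i '' "${line_number}s/.*/&xx/" demo.txt, since -i there requires an explicit (possibly empty) suffix argument.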
Mounted volumes & bash in OSX
I'm working on a disk space monitor script in OSX and am struggling to first generate a list of volumes. I need this list to be generated dynamically as it changes over time; having this work properly would also make the script portable. I'm using the following script snippet:

#!/bin/bash
PATH=/bin:/usr/bin:/sbin:/usr/sbin
export PATH

FS=$(df -l | grep -v Mounted | awk '{ print $6 }')

while IFS= read -r line
do
  echo $line
done < "$FS"

Which generates:

test.sh: line 9: / /Volumes/One-TB /Volumes/pfile-archive-offsite-three-CLONE /Volumes/ERDF-Files-Offsite-Backup /Volumes/ESXF-Files-Offsite-Backup /Volumes/ACON-Files-Offsite-Backup /Volumes/LRDF-Files-Offsite-Backup /Volumes/EPLK-Files-Offsite-Backup: No such file or directory

I need the script to generate output like this:

/
/Volumes/One-TB
/Volumes/pfile-archive-offsite-three-CLONE
/Volumes/ERDF-Files-Offsite-Backup
/Volumes/ESXF-Files-Offsite-Backup
/Volumes/ACON-Files-Offsite-Backup
/Volumes/LRDF-Files-Offsite-Backup
/Volumes/EPLK-Files-Offsite-Backup

Ideas, suggestions? Alternate or better methods of generating a list of mounted volumes are also welcome. Thanks! Dan
< is for reading from a file. You are not reading from a file but from a bash variable, so try using <<< instead of < on the last line.

Alternatively, you don't need to store the results in a variable and then read from the variable; you can read directly from the output of the pipeline, like this (I have created a function for neatness):

get_data() {
  df -l | grep -v Mounted | awk '{ print $6 }'
}

get_data | while IFS= read -r line
do
  echo $line
done

Finally, the loop doesn't do anything useful, so you can just get rid of it:

df -l | grep -v Mounted | awk '{ print $6 }'
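A sketch of the here-string fix, using a canned df-style sample so it runs anywhere (real df -l output will differ per machine, and the mount-point column number can vary between platforms):

```shell
# Canned sample standing in for `df -l` output (Linux-style, 6 columns)
df_output='Filesystem 1K-blocks Used Available Use% Mounted
/dev/sda1 100 50 50 50% /
/dev/sdb1 200 100 100 50% /Volumes/One-TB'

# Extract the mount-point column, skipping the header line
FS=$(printf '%s\n' "$df_output" | awk 'NR > 1 { print $6 }')

# <<< feeds the variable's contents to the loop as if it were a file
while IFS= read -r line
do
    echo "$line"
done <<< "$FS"
```

The <<< here-string is a bash feature, so this requires bash rather than a plain POSIX sh.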