How do I update select variables in another file in shell script? - bash

I apologize in advance if the solution to my problem is very straightforward and obvious, as I'm very new to shell scripting. For a program I'm working on, I need to update the contents of another file that was previously created. For example, say this is one of the files to be updated, student_1.item:
student_1 Sally Johnson
3 9
Mr. Ortiz
I am to create another bash file that asks for the name of the file to be updated, and then prompts the user for the following one at a time:
Student Name:
Student Number:
Grade:
Age:
Teacher:
The user is able to leave any of the above blank, and whatever isn't filled in isn't changed in the original student_1.item file. Whatever is filled out, however, should be changed and updated to whatever the user put in.
I believe that I'd need to understand the concept of environment variables, but I'm a little stuck. Would I first need to read the lines into variables from student_1.item and then export any changed variables back into student_1.item?
Again, my apologies if this is a silly question. Any help is appreciated!

Sounds a bit complicated, but here is an untested solution: it reads the file line by line and assigns the lines to three variables. After that, these variables are parsed to get the proper values for the student.
# line number
n=0
# read file line by line
while read -r line; do
  # assign line to a variable
  case $n in
    0) firstline=$line ;;
    1) secondline=$line ;;
    2) thirdline=$line ;;
  esac
  # increment line number
  n=$((n+1))
done < student_1.item
# parse each line variable
student_number=$(echo "$firstline" | cut -d' ' -f1)
student_name=$(echo "$firstline" | cut -d' ' -f2,3)
student_grade=$(echo "$secondline" | cut -d' ' -f1)
student_age=$(echo "$secondline" | cut -d' ' -f2)
student_teacher=$thirdline
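The parsing above is only half of what the question asks for; the other half is prompting for new values and writing the record back. Here is an untested sketch of that part, wrapped in a function for reuse. The prompts and the three-line file layout are taken from the question; the helper name `update_student` and the `new_*` variable names are my own:

```shell
#!/bin/bash
# update_student FILE: prompt for each field; blank input keeps the old value
update_student() {
  local file=$1
  # read the three lines of the record (the name keeps both words)
  { read -r student_number student_name
    read -r student_grade student_age
    read -r student_teacher
  } < "$file"

  read -rp "Student Name: " new_name
  read -rp "Student Number: " new_number
  read -rp "Grade: " new_grade
  read -rp "Age: " new_age
  read -rp "Teacher: " new_teacher

  # ${var:-default} falls back to the old value when the new one is blank
  {
    echo "${new_number:-$student_number} ${new_name:-$student_name}"
    echo "${new_grade:-$student_grade} ${new_age:-$student_age}"
    echo "${new_teacher:-$student_teacher}"
  } > "$file"
}
```

No environment variables or `export` are needed: the script reads the file into ordinary shell variables and rewrites the file in one go.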

Related

Input from command line with incorrect result

I am trying to read a file that was output in table format. I am able to read the file, which lists Volume description, snapshotId, and TimeStarted for an aws region. I ask the user to input a volume name and output the snapshotId for the volume entered. The list contains volumes0 through volumes30.
The issue is that when the user enters Volume0, it outputs the snapshotId correctly, but if the user enters Volume20, it only outputs |. My guess is that this happens because the original file being read is in table format. Initially I thought I could put in a condition: if the user enters Volume0, print snapshotId one way, else if the user enters Volume20, print it another way.
I am looking for a better way to do this. How can I ignore table format when reading the file, should I convert it to text format? How? Or how can I ignore any format when reading? Here is my bash script:
readoutput() {
  echo "Hello, please tell me what Volume you are searching for..(Volume?):"
  read volSearch
  echo "Searching for newest SnapshotIds from /Users/User/Downloads/GetSnapId for:" $volSearch
  sleep 5
  input="/Users/User/Downloads/GetSnapId"
  if x=$(grep -m 1 "$volSearch" "$input")
  then
    echo "$x"
  else
    echo "$volSearch not found..ending search"
  fi
  extractSnap=$(echo "$x" | grep "snap-" | awk '{print $7}')
  echo $extractSnap
}
readoutput
The issue is that awk splits fields on whitespace by default, not on the table's | separators. In the row for Volume0 there is a space before the vertical line, so the | counts as its own field, but in the row for Volume20 there is no space, so the | sticks to the adjacent value and $7 lands on the wrong column.
Try this instead:
extractSnap=$(echo "$x" | cut -d'|' -f3 | cut -d'-' -f 2)
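If you would rather stay with awk, you can also tell it to split on | explicitly instead of letting it fall back to whitespace. A small sketch; the sample row and the assumption that the snapshot id sits in the third pipe-delimited column mirror the `cut -d'|' -f3` above:

```shell
# a sample row in the (assumed) table layout of the aws output
x='| Volume20 | snap-0abc123 | 2020-01-01 |'
# split on '|' instead of whitespace, then strip the padding
# spaces around the third column before printing it
extractSnap=$(echo "$x" | awk -F'|' '{gsub(/ /, "", $3); print $3}')
echo "$extractSnap"
```

Setting `-F'|'` makes the field positions the same for every row, whether or not the table padding leaves a space next to the separator.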

How to loop a variable range in cut command

I have a file with 2 columns, and I want to use the values from the second column to set the range in the cut command to select a range of characters from another file. The range I desire is the character at the position given by the value in the second column, plus the next 10 characters. I will give an example in a while.
My files are something like that:
File with 2 columns and no blank lines between lines (file1.txt):
NAME1 10
NAME2 25
NAME3 48
NAME4 66
File from which I want to extract the variable ranges of characters (just one very long line with no spaces) (file2.txt):
GATCGAGCGGGATTCTTTTTTTTTAGGCGAGTCAGCTAGCATCAGCTACGAGAGGCGAGGGCGGGCTATCACGACTACGACTACGACTACAGCATCAGCATCAGCGCACTAGAGCGAGGCTAGCTAGCTACGACTACGATCAGCATCGCACATCGACTACGATCAGCATCAGCTACGCATCGAAGAGAGAGC
Desired resulting file, one sequence per line (result.txt):
GATTCTTTTT
GGCGAGTCAG
CGAGAGGCGA
TATCACGACT
The resulting file would have the characters from 10-20, 25-35, 48-58 and 66-76, each range in a new line. So, it would always keep the range of 10, but in different start points and those start points are set by the values in the second column from the first file.
I tried the command:
for i in $(awk '{print $2}' file1.txt);
do
p1=$i;
p2=`expr "$1" + 10`
cut -c$p1-$2 file2.txt > result.txt;
done
I don't get any output or error message.
I also tried:
while read line; do
set $line
p2=`expr "$2" + 10`
cut -c$2-$p2 file2.txt > result.txt;
done <file1.txt
This last command gives me an error message:
cut: invalid range with no endpoint: -
Try 'cut --help' for more information.
expr: non-integer argument
There's no need for cut here; dd can do the job of indexing into a file and reading only the number of bytes you want. (Note that status=none is a GNUism; on other platforms you may need to leave it out and redirect stderr instead if you want to suppress the informational logging.)
while read -r name index _; do
  dd if=file2.txt bs=1 skip="$index" count=10 status=none
  printf '\n'
done <file1.txt >result.txt
This approach avoids excessive memory requirements (as present when reading the whole of file2 -- assuming it's large), and has bounded performance requirements (overhead is equal to starting one copy of dd per sequence to extract).
Using awk
$ awk 'FNR==NR{a=$0; next} {print substr(a,$2+1,10)}' file2 file1
GATTCTTTTT
GGCGAGTCAG
CGAGAGGCGA
TATCACGACT
If file2.txt is not too large, then you can read it in memory,
and use Bash sub-strings to extract the desired ranges:
data=$(<file2.txt)
while read -r name index _; do
  echo "${data:$index:10}"
done <file1.txt >result.txt
This will be much more efficient than running cut or another process for every single range definition.
(Thanks to @CharlesDuffy for the tip to read the data without a useless cat, and for the while loop.)
One way to solve it:
#!/bin/bash
while read line; do
  pos=$(echo "$line" | cut -f2 -d' ')
  x=$(head -c $(( pos + 10 )) file2.txt | tail -c 10)
  echo "$x"
done < file1.txt > result.txt
It's not the solution an experienced bash hacker would use, but it is very good for someone who is new to bash. It uses tools that are very versatile, although somewhat slow if you need high performance. Shell scripting is commonly used by people who rarely write shell scripts but know a few commands and just want to get the job done. That's why I'm including this solution, even if the other answers are superior for more experienced people.
The first line inside the loop is pretty easy: it just extracts the number from file1.txt. The second line uses the very versatile tools head and tail. Usually they are used with lines rather than characters; here, I print the first pos + 10 characters with head, and the result is piped into tail, which prints the last 10 characters.
Thanks to @CharlesDuffy for improvements.

How to assign line number to a variable in a while loop

I have a file contains some lines. Now I want to read the lines and get the line numbers. As below:
while read line
do
string=$line
number=`awk '{print NR}'` # This way is not right, gets all the line numbers.
done
Here is my scenario: I have one file, contains some lines, such as below:
2015Y7M3D0H0Mi44S7941
2015Y7M3D22H24Mi3S7927
2015Y7M3D21H28Mi21S5001
I want to read each line of this file, and print out the last characters starting with "S" together with the line number. It should look like:
1 S7941
2 S7927
3 S5001
So, what should I properly do to get this?
Thanks.
Can anyone help me out ???
The UNIX shell is simply an environment from which to call tools and a language to sequence those calls. The UNIX general purpose text processing tool is awk so just use it:
$ awk '{sub(/.*S/,NR" S")}1' file
1 S7941
2 S7927
3 S5001
If you're going to be doing any text manipulation, get the book Effective Awk Programming, 4th Edition, by Arnold Robbins.
I just asked one of my friends and found a simple way:
cat -n $file | while read line
do
  number=$(echo $line | cut -d " " -f 1)
  echo $number
done
In other words, if we can't get the line number from the file itself, we prepend one with cat -n and read it back.
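You can also keep a counter in the loop itself and use parameter expansion for the "S..." part, which avoids the extra cut and echo processes. An untested sketch against the sample file above; the function name `number_s_tokens` is mine:

```shell
#!/bin/bash
# number_s_tokens FILE: print "<line number> S<digits>" for each line
number_s_tokens() {
  local n=0 line
  while read -r line; do
    n=$((n + 1))
    # ${line##*S} strips everything up to and including the last S
    echo "$n S${line##*S}"
  done < "$1"
}
```

Run against the three timestamp lines from the question, this prints `1 S7941`, `2 S7927`, `3 S5001`.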

Create bash script with menu of choices that come from the output of another script

forgive me if this is painfully simple but I'm not a programmer so it's hard for me to tell what's easy and what's hard.
I have a bash script that I use (that someone else wrote) for finding out internal customer data where I basically run "info customername" and it searches our internal customer database for all customer records matching that customer name and outputs a list with their account numbers (which all have the same prefix of 11111xxxxxxxx), in the form of "Sample Customer - 111119382818873".
We have another bash script where you enter "extrainfo 11111xxxxxxxx", we get the plaintext data from their account, which we use for many things that are important to us.
The missing feature is that "extrainfo" cannot search by name, only number. So I'd like to bridge that gap. Ideally, I'd enter "extrainfo customername" and it would run a search using "info customername", generate a list of results as a menu, allow me to choose which customer I meant, and then run the "extrainfo 11111xxxxxxxxx" command of that customer. If there is only one match, it would automatically run the extrainfo command properly.
Here's what I have that works but only for the first result that "info customername" generates:
#!/bin/bash
key=`/usr/local/bin/info $1 | grep 11111 | awk '{print $NF}'`
/usr/local/bin/extrainfo $key
It's the menu stuff I'm having a hard time figuring out. I hope this was clear but again, I'm pretty dumb with this stuff so I probably left something important out. Thanks.
This might work for you:
#!/bin/bash
# Set the prompt for the select command
PS3="Type a number or 'q' to quit: "
# Create a list of customer names and numbers (fill gaps with underscores)
keys=$(/usr/local/bin/info $1 | sed 's/ /_/g')
# Show a menu and ask for input.
select key in $keys; do
  if [ -n "$key" ]; then
    /usr/local/bin/extrainfo $(sed 's/.*_11111/11111/' <<<"$key")
  fi
  break
done
Basically, this script reads all the customer info, finds all lines with the customer number prefix, and loads it into search.txt. Then it displays the file with line numbers in front of it, waits for you to choose a line number, and then strips out the customer name and spaces in front of the customer id. Finally, it runs my other script with just the customer id. It's hacky but functional.
#!/bin/bash
/usr/local/bin/info $1 | grep 11111 > search.txt
cat -n search.txt
read num
key=`sed -n ${num}p search.txt | awk '{print $NF}'`
/usr/local/bin/extrainfo $key

Grep outputs multiple lines, need while loop

I have a script which uses grep to find lines in a text file (ics calendar to be specific)
My script finds a date match, then goes up and down a few lines to copy the summary and start time of the appointment into a separate variable. The problem I have is that I'm going to have multiple appointments at the same time, and I need to run through the whole process for each result in grep.
Example:
LINE=`grep -F -n 20130304T232200 /path/to/calendar.ics | cut -f1 -d:`
And it outputs only the lines, such as
86 89
Then it goes on to capture my other variables, as such:
SUMMARYLINE=$(( $LINE + 5 ))
SUMMARY=`sed -n "$SUMMARYLINE"p /path/to/calendar.ics`
My script runs fine with one result, but it obviously won't work with more than one, and I need it to. Should I send the grep results into an array, or into a separate text file to read from? I'm sure I'll need a while loop in here somehow. Need some help please.
You can call grep from a loop quite easily:
while IFS=':' read -r LINE notused # avoids the use of cut
do
  # First field is now in $LINE
  # Further processing
done < <(grep -F -n 20130304T232200 /path/to/calendar.ics)
However, if the file is not too large, then it might be easier to read all the matches into an array and work from that.
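The array approach could look like this with mapfile (bash 4+). A sketch reusing the grep and the LINE + 5 offset from the question; the wrapper function `summaries_for` and its arguments are mine, and the +5 offset is assumed to be constant for your calendar layout:

```shell
#!/bin/bash
# summaries_for FILE TIMESTAMP: print the line 5 below each match
summaries_for() {
  local file=$1 stamp=$2
  local -a lines
  local LINE
  # one array element per matching line number
  mapfile -t lines < <(grep -F -n "$stamp" "$file" | cut -d: -f1)
  for LINE in "${lines[@]}"; do
    # same offset logic as SUMMARYLINE=$(( LINE + 5 )) in the question
    sed -n "$(( LINE + 5 ))p" "$file"
  done
}
```

With the matches in an array you can loop over them as many times as you like, pull several offsets per match, or bail out early, none of which is convenient inside a single pipeline.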
With your proposed solution, you are reading through the file several times. Using awk, you can do it in one pass:
awk -F: -v time=20130304T232200 '
$1 == "SUMMARY" {summary = substr($0,9)}
/^DTSTART/ {start = $2}
/^END:VEVENT/ && start == time {print summary}
' calendar.ics
