Adding new lines to multiple files - bash

I need to add new lines with specific information to one or multiple files at the same time.
I tried to automate this task using the following script:
for i in /apps/data/FILE*
do
echo "nice weather 20190830 friday" >> $i
done
It does the job, but I would like to automate it further and have the script prompt me for the file name and the line I want to add.
I expect the output to be like
enter file name : file01
enter line to add : IWISHIKNOW HOWTODOTHAT
Thank you everyone.

In order to read user input you can use read:
read user_input_file
read user_input_text
read user_input_line
You can print a prompt before each question with echo -n (in bash, read -p "prompt" var does both in one step):
echo -n "enter file name : "
read user_input_file
echo -n "enter line to add : "
read user_input_text
echo -n "enter line position : "
read user_input_line
In order to insert the line at the desired position you can "play" with head and tail:
new_file="$user_input_file.new"   # any temporary name will do
head -n $((user_input_line - 1)) "$user_input_file" > "$new_file"
echo "$user_input_text" >> "$new_file"
tail -n +"$user_input_line" "$user_input_file" >> "$new_file"
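Putting those pieces together, a minimal sketch of the whole script (the script name and the .new suffix are arbitrary choices, not anything standard):
#!/bin/bash
# insert-line.sh: prompt for a file, a line of text, and a position,
# then insert the text at that position
echo -n "enter file name : "
read user_input_file
echo -n "enter line to add : "
read user_input_text
echo -n "enter line position : "
read user_input_line

new_file="$user_input_file.new"
head -n $((user_input_line - 1)) "$user_input_file" > "$new_file"
printf '%s\n' "$user_input_text" >> "$new_file"   # printf is safer than echo for arbitrary text
tail -n +"$user_input_line" "$user_input_file" >> "$new_file"
mv "$new_file" "$user_input_file"   # replace the original in place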

Requiring interactive input is horrible for automation. Make a command which accepts a message and a list of files to append to as command-line arguments instead.
#!/bin/sh
msg="$1"
shift
echo "$msg" | tee -a "$#"
Usage:
scriptname "today is a nice day" file1 file2 file3
The benefits for interactive use are obvious: you get to use your shell's history mechanism and filename completion (usually bound to Tab). It also makes it much easier to build more complicated scripts on top of this one later on.
Putting the message in the first command-line argument may look baffling to newcomers, but it allows for a very simple overall design where "the other arguments" (zero or more) are the files you want to manipulate. grep has this design, and so do sed and many other standard Unix commands.
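For instance, composing on top of it is easy; here is a hypothetical wrapper (the name stamped is an assumption, and scriptname is the script above) that prefixes the date to the message before delegating:
#!/bin/sh
# stamped: prepend today's date to the message, then append to the files
msg="$1"
shift
scriptname "$(date +%Y-%m-%d) $msg" "$@"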

You can use the read statement to prompt for input.
read does make your script generic, but if you wish to automate it you would then need an accompanying expect script to feed answers to the read statements.
Instead you can take arguments to the script, which helps with automation. No prompting:
#!/usr/bin/env bash
[[ $# -ne 2 ]] && echo "usage: $0 <filename-pattern> <content>" && exit 1
file=$1 && shift
con=$1
for i in $file   # deliberately unquoted: the shell expands the pattern here, not at call time
do
echo "$con" >> "$i"
done
To use:
./script.sh "<filename>" "<content>"
The quotes are important: quoting the content keeps its spaces as part of a single argument, and quoting the filename pattern stops the shell from expanding it before the script is called.
Example: ./script.sh "file*" "samdhaskdnf asdfjhasdf"
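To see the difference, suppose the current directory contains file01 and file02 (hypothetical names):
./script.sh "file*" "some content"   # $1 is the pattern file*, $2 is the content
./script.sh file* "some content"     # shell expands first: $1=file01, $2=file02, $3=the content; the $# check rejects this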

How to recall a string in shell script

I made a script like this:
#! /usr/bin/bash
a=`ls ../wrfprd/wrfout_d0${i}* | cut -c22-25`
b=`ls ../wrfprd/wrfout_d0${i}* | cut -c27-28`
c=`ls ../wrfprd/wrfout_d0${i}* | cut -c30-31`
d=`ls ../wrfprd/wrfout_d0${i}* | cut -c33-34`
f=$a$b$c$d
echo $f
sed "s/.* startdate=.*/export startdate=${f}/g" ./post_process > post_process2
The echo command works and prints 2008042118, which is what I want, but in the file post_process2 the line comes out as export startdate= and the value of f is not filled in. I want to produce a line like export startdate=2008042118.
First -- don't use ls here -- it's both expensive in terms of performance (compared to globbing, which is performed internal to the shell without starting any external programs), and doesn't guarantee useful output for the full range of possible filenames, making its use in this context inherently bug-prone. A better way to retrieve pieces from a filename, assuming a ksh-derived shell such as bash or zsh, would look like this:
#!/bin/bash
# this is an array, but we're only going to use the first element
file=( "../wrfprd/wrfout_d0${i}"* )
[[ -e $file ]] || { echo "No file found" >&2; exit 1; }
# bash substrings are 0-indexed while cut -c is 1-indexed,
# so each offset is one less than in the original cut commands
f=${file:21:4}${file:26:2}${file:29:2}${file:32:2}
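As a sanity check, with a path of the shape the question implies (the exact file name is an assumption reconstructed from the 2008042118 output):
file="../wrfprd/wrfout_d01_2008-04-21_18:00:00"
f=${file:21:4}${file:26:2}${file:29:2}${file:32:2}
echo "$f"   # prints 2008042118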
Second, don't use sed to modify code -- doing so requires that your runtime user have permission to modify its own code, and moreover invites injection vulnerabilities. Just write your content out to a data file:
printf '%s\n' "$f" >startdate.txt
...and, in your second script, to read in the value from that file:
# if the shebang is #!/bin/bash
startdate=$(<startdate.txt)
# if the shebang is #!/bin/sh
startdate=$(cat startdate.txt)
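Putting the two halves together, a minimal sketch of the consuming side, assuming the bash shebang (the echo stands in for whatever post_process actually does with the value):
#!/bin/bash
# read the value produced by the first script from the data file
startdate=$(<startdate.txt)
export startdate
echo "startdate is $startdate"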

What is wrong with my bash script?

What I have to do is edit a script given to me that will check whether the user has write permission for a file named journal-file in the user's home directory. The script should take appropriate action if journal-file exists and the user does not have write permission for it.
Here is what I have written so far:
if [ -w $HOME/journal-file ]
then
file=$HOME/journal-file
date >> file
echo -n "Enter name of person or group: "
read name
echo "$name" >> $file
echo >> $file
cat >> $file
echo "--------------------------------" >> $file
echo >> $file
exit 1
else
echo "You do not have write permission."
exit 1
fi
When I run the script it prompts me to input the name of the person/group, but after I press Enter nothing happens. It just sits there, letting me keep typing, and never continues past that part. Why is it doing this?
The statement:
cat >>$file
will read from standard input and write to the file. That means it will wait until you indicate end of file with something like CTRL-D. It's really no different from typing cat at a command line: nothing happens until you enter something, and it keeps reading until you indicate end of file.
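You can see this interactively; a quick experiment at the prompt (notes.txt is just an example file):
$ cat >> notes.txt
first line to append
second line to append
(cat keeps reading until you press CTRL-D; only then does the shell prompt return)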
If you're trying to append another file to the output file, you need to specify its name, such as cat $HOME/myfile.txt >>$file.
If you're trying to get a blank line in there, use echo rather than cat, such as echo >>$file.
You also have a couple of other problems, the first being:
date >> file
since that will try to create a file called file (in your working directory). Use $file instead.
The second is the exit code of 1 in the case where what you're trying to do has succeeded. That may not be a problem now, but someone using this at a later date may wonder why it always seems to indicate failure.
To be honest, I'm not really a big fan of the if ... then return else ... construct. I prefer fail-fast with less indentation and better grouping of output redirection, such as:
file=${HOME}/journal-file
if [[ ! -w ${file} ]] ; then
echo "You do not have write permission."
exit 1
fi
echo -n "Enter name of person or group: "
read name
(
date
echo "$name"
echo
echo "--------------------------------"
echo
) >>${file}
I believe that's far more readable and maintainable.
It's this line:
cat >> $file
cat is concatenating input from standard input (i.e. whatever you type) onto $file.
I think the part
cat >> $file
copies everything from stdin to the file. If you hit Ctrl+D (end of file) the script can continue.
1) You'd better first check whether the file exists:
[[ -e $HOME/journal-file ]] || \
{ echo "$HOME/journal-file does not exist"; exit 1; }
2) You have to change "cat >> $file" to whatever you actually want to do with the file. This is the command that is blocking the execution of the script.

Parsing command output in bash to variables

I have a number of bash scripts, each doing its own thing merrily. Do note that while I program in other languages, I only use Bash to automate things, and am not very good at it.
I'm now trying to combine a number of them to create "meta" scripts, if you will, which use other scripts as steps. The problem is that I need to parse the output of each step to be able to pass a part of it as params to the next step.
An example:
stepA.sh
[...does stuff here...]
echo "Task complete successfuly"
echo "Files available at: $d1/$1"
echo "Logs available at: $d2/$1"
Both of the above are paths, such as /var/www/thisisatest and /var/log/thisisatest (note that files always start with /var/www and logs always start with /var/log). I'm only interested in the files path.
stepB.sh
[...does stuff here...]
echo "Creation of $d1 complete."
echo "Access with username $usr and password $pass"
All variables here are simple strings that may contain special characters (but no spaces).
What I'm trying to build is a script that runs stepA.sh, then stepB.sh and uses the output of each to do its own stuff. What I'm currently doing (both above scripts are symlinked to /usr/local/bin without the .sh part and made executable):
#!/bin/bash
stepA $1 | while read -r line; do
# Create the container, and grab the file location
# then pass it to then next pipe
if [[ "$line" == *:* ]]
then
POS=`expr index "$line" "/"`
PTH="/${line:$POS}"
if [[ "$PTH" == *www* ]]
then
#OK, have what I need here, now what?
echo $PTH;
fi
fi
done
# Somehow get $PTH here
stepB $1 | while read -r line; do
...
done
#somehow have the required strings here
I'm stuck on passing PTH to the next step. I understand this is because piping runs the loop in a subshell; however, all examples I've seen refer to files and not commands, and I could not make this work. I tried piping the echo to a "next step", such as
stepA | while ...
echo $PTH
done | while ...
#Got my var here, but cannot run stuff
done
How can I run stepA and have the PTH variable available for later?
Is there a "better way" to extract the path I need from the output than nested ifs ?
Thanks in advance!
Since you're using bash explicitly (in the shebang line), you can use its process substitution feature instead of a pipe. The while loop then runs in the current shell rather than a subshell, so variables set inside it (like PTH) are still visible after done:
while read -r line; do
if [[ "$line" == *:* ]]
.....
fi
done < <(stepA "$1")
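Applied to the question's loop (PTH and stepA as in the question; the parameter expansion replaces the expr/nested-if extraction):
PTH=""
while read -r line; do
    if [[ "$line" == "Files available at: "* ]]; then
        PTH="${line#Files available at: }"   # strip the label, keep the path
    fi
done < <(stepA "$1")
echo "files are at: $PTH"   # PTH is still set here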
Alternately, you could capture the command's output to a string variable, and then parse that:
output="$(stepA $1)"
tmp="${output#*$'\nFiles available at: '}" # output with everything before the filepath trimmed
filepath="${tmp%%$'\n'*}" # trim the first newline and everything after it from $tmp
tmp="${output#*$'\nLogs available at: '}"
logpath="${tmp%%$'\n'*}"

Execute command after checking file type

I am working on a bash script which executes a command depending on the file type. I want to use the file command, not the file extension, to determine the type, but I am bloody new to this scripting stuff, so if someone can help me I would be very thankful! - Thanks!
Here the script I want to include the function:
#!/bin/bash
export PrintQueue="/root/xxx";
IFS=$'\n'
for PrintFile in $(/bin/ls -1 ${PrintQueue}); do
lpr -r ${PrintQueue}/${PrintFile};
done
The point is, all files which are PDFs should be printed with the lpr command, all others with ooffice -p
You are going through a lot of extra work. Here's the idiomatic code; I'll let the man pages provide the explanation of the pieces:
#!/bin/sh
for path in /root/xxx/* ; do
case `file --brief "$path"` in
PDF*) cmd="lpr -r" ;;
*) cmd="ooffice -p" ;;
esac
$cmd "$path"   # $cmd is left unquoted so it splits into command and flag
done
Some notable points:
using sh instead of bash increases portability and narrows the choices of how to do things
don't use ls when a glob pattern will do the same job with less hassle
the case statement has surprising power
First, two general shell programming issues:
Do not parse the output of ls. It's unreliable and completely useless. Use wildcards, they're easy and robust.
Always put double quotes around variable substitutions, e.g. "$PrintQueue/$PrintFile", not $PrintQueue/$PrintFile. If you leave the double quotes out, the shell performs wildcard expansion and word splitting on the value of the variable. Unless you know that's what you want, use double quotes. The same goes for command substitutions $(command).
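A quick illustration of the word-splitting hazard, using a hypothetical filename with a space:
f="my file.pdf"
ls $f     # word splitting: ls receives two arguments, "my" and "file.pdf"
ls "$f"   # one argument: "my file.pdf"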
Historically, implementations of file have had different output formats, intended for humans rather than parsing. Most modern implementations have an option to output a MIME type, which is easily parseable.
#!/bin/bash
print_queue="/root/xxx"
for file_to_print in "$print_queue"/*; do
case "$(file -i "$file_to_print")" in
application/pdf\;*|application/postscript\;*)
lpr -r "$file_to_print";;
application/vnd.oasis.opendocument.*)
ooffice -p "$file_to_print" &&
rm "$file_to_print";;
# and so on
*) echo 1>&2 "Warning: $file_to_print has an unrecognized format and was not printed";;
esac
done
#!/bin/bash
PRINTQ="/root/docs"
OLDIFS=$IFS
IFS=$(echo -en "\n\b")   # split the ls output on newlines only, so names with spaces survive
for file in $(ls -1 "$PRINTQ")
do
# ls -1 prints bare names, so prefix $PRINTQ/ when touching the files
type=$(file --brief "$PRINTQ/$file" | awk '{print $1}')
if [ "$type" == "PDF" ]
then
echo "[*] printing $file with LPR"
lpr "$PRINTQ/$file"
else
echo "[*] printing $file with OPEN-OFFICE"
ooffice -p "$PRINTQ/$file"
fi
done
IFS=$OLDIFS

Send input to running shell script from bash

I'm writing a test suite for my app and using a bash script to check that the test suite output matches the expected output. Here is a section of the script:
for filename in test/*.bcs ;
do
./BCSC $filename > /dev/null
NUMBER=`echo "$filename" | awk -F"[./]" '{print $2}'`
gcc -g -m32 -mstackrealign runtime.c $filename.s -o test/e$NUMBER
# run the file and diff against expected output
echo "Running test file... "$filename
test/e$NUMBER > test/e$NUMBER.out
if [ $NUMBER = "4" ]
then
# it's trying to read the line
# Pass some input to the file...
fi
diff test/e$NUMBER.out test/o$NUMBER.out
done
Test #4 tests reading input from stdin. I'd like to detect test #4 and, if so, pass it a set of sample inputs.
I just realized you could do it like
test/e4 < test/e4.in > test/e4.out
where e4.in has the sample inputs. Is there another way to pass input to a running script?
If you want to supply the input data directly in the script, use a here-document:
if [ $NUMBER = "4" ]; then
test/e$NUMBER > test/e$NUMBER.out <<END_DATA
test input goes here
you can supply as many lines of input as you want
END_DATA
else
test/e$NUMBER > test/e$NUMBER.out
fi
There are several variants: if you quote the delimiter (i.e. <<'END_DATA'), it won't do things like $variable replacement in the here-document. If you use <<-DELIMITER, it'll remove leading tab characters from each line of input (so you can indent the input to match the surrounding code). See the "Here Documents" section of the bash man page for details.
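A quick demonstration of the quoting difference (name is just a placeholder variable):
name="world"
# unquoted delimiter: $name is expanded, so this prints "hello world"
cat <<END
hello $name
END
# quoted delimiter: nothing is expanded, so this prints "hello $name" literally
cat <<'END'
hello $name
END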
The way you mentioned is the conventional method to redirect a file into stdin when issuing a command/script.
Maybe it would help if you elaborated on the "other way" you're looking for: why do you even need a different way to do this? Is there anything you need to do which this method does not allow?
You can also do:
cat test/e4.in | test/e4 > test/e4.out
which is equivalent to the redirection, at the cost of an extra cat process.
