Indenting "read" input when it contains multiple lines - bash

I have a read command in a bash script whose input defines an array. The input will frequently be copy/pasted data that contains multiple lines. Each line in the multi-line input is correctly captured and added to the array as separate elements, but I'd like to indent each line with a > prefix in the terminal window when it is pasted in.
This is for bash v3 running on macOS. I've attempted various flavors of the read command but haven't found anything that works.
Script:
#!/bin/bash
echo "Provide inputs:"
until [[ "$message" = "three" ]]; do
  read -p "> " message
  myArray+=($message) # Input added to array for later processing
done
Manually typed inputs look like this:
Provide inputs:
> one
> two
> three
But a copy/pasted multi-line input looks like this:
Provide inputs:
> one
two
three
> >
The desired result is for the copy/pasted multi-line input to look identical to the manually entered inputs.

It sounds like the issue is with the way read works: read echoes back keystrokes, and I think that because of stdout buffering the pasted characters are written before the output of the echo statements is flushed.
Using a combination of an echo command and read's -e option (readline-based input) fixes this in my testing.
#!/bin/bash
echo "Provide inputs:"
until [[ "$message" = "three" ]]; do
  echo -ne "> "
  read -e message
  myArray+=("$message") # Input added to array for later processing
done

(Answer revised after the OP clarified what they want.)
The screen will look identical for line-by-line input and for a multi-line paste if you simply drop the > prompt (I also added -r so backslashes in the input are taken literally):
until [[ "$message" = "three" ]]; do
  read -r message
  myArray+=("$message")
done
When you do want to see the >, you can use the somewhat ugly
printf "> "
until [[ "$message" = "three" ]]; do
  read -rs message
  printf "%s\n> " "${message}"
  myArray+=("$message")
done
In this case the input is only shown after Enter is pressed, so this seems worse.


Adding new lines to multiple files

I need to add new lines with specific information to one or multiple files at the same time.
I tried to automate this task using the following script:
for i in /apps/data/FILE*
do
  echo "nice weather 20190830 friday" >> $i
done
It does the job, yet I wish I could automate it further and have the script ask me for the file name and the line I want to add.
I expect the output to be like
enter file name : file01
enter line to add : IWISHIKNOW HOWTODOTHAT
Thank you everyone.
In order to read user input you can use
read user_input_file
read user_input_text
read user_input_line
You can print a prompt before each question with echo -n:
echo -n "enter file name : "
read user_input_file
echo -n "enter line to add : "
read user_input_text
echo -n "enter line position : "
read user_input_line
In order to insert the line at the desired position you can "play" with head and tail:
head -n $((user_input_line - 1)) "$user_input_file" > "$new_file"
echo "$user_input_text" >> "$new_file"
tail -n +"$user_input_line" "$user_input_file" >> "$new_file"
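A concrete sketch of that head/tail trick, with the file name, position, and text hard-coded in place of the read answers (the temp files exist only for the demo):

```shell
#!/usr/bin/env bash
# Insert a line at position N of a file using head/tail.
# $src stands in for $user_input_file, $pos for $user_input_line, $text for $user_input_text.
src=$(mktemp) && new=$(mktemp)
printf 'line1\nline2\nline3\n' > "$src"
pos=2
text='inserted'
head -n $((pos - 1)) "$src"  > "$new"   # everything before position N
printf '%s\n' "$text"       >> "$new"   # the new line
tail -n +"$pos" "$src"      >> "$new"   # the rest, from position N onward
result=$(cat "$new")
echo "$result"
rm -f "$src" "$new"
```

Running it prints line1, inserted, line2, line3 in that order.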
Requiring interactive input is horrible for automation. Make a command which accepts a message and a list of files to append to as command-line arguments instead.
#!/bin/sh
msg="$1"
shift
echo "$msg" | tee -a "$@"
Usage:
scriptname "today is a nice day" file1 file2 file3
The benefits for interactive use are obvious -- you get your shell's history mechanism and filename completion (usually bound to Tab) -- and it's also much easier to build more complicated scripts on top of this one later.
The design of putting the message in the first command-line argument may seem baffling to newcomers, but it allows for a very simple overall design where "the other arguments" (zero or more) are the files you want to manipulate. See how grep has this design, and sed, and many many other standard Unix commands.
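A minimal self-test of that design, with the script body inlined as a function (the name append_msg and the temp files are just for this demonstration):

```shell
#!/bin/sh
# Same logic as the script above: message first, then the files to append to.
append_msg() {
    msg="$1"
    shift
    echo "$msg" | tee -a "$@"
}
f1=$(mktemp); f2=$(mktemp)
append_msg "today is a nice day" "$f1" "$f2" > /dev/null
appended=$(cat "$f2")
echo "$appended"
rm -f "$f1" "$f2"
```

Both files end up with the message appended, while tee also echoes it to standard output (discarded here).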
You can use a read statement to prompt for input.
read does make your script generic, but if you wish to automate it you then need an accompanying expect script to feed input to the read statements.
Instead you can pass arguments to the script, which helps with automation. No prompting...
#!/usr/bin/env bash
[[ $# -ne 2 ]] && echo "print usage here" && exit 1
file=$1 && shift
con=$1
for i in $file   # left unquoted on purpose so the glob pattern expands here
do
  echo "$con" >> "$i"
done
To use:
./script.sh "<filename>" "<content>"
The quotes around the content are important so that spaces are treated as part of it. Quote the filename pattern too, so that the shell does not expand it before calling the script.
Example: ./script.sh "file*" "samdhaskdnf asdfjhasdf"

Compare strings if contains in bash

I am trying to implement a bash script that reads from an error log file and compares strings against exceptions.
I am trying to compare them with if:
error="[*] Text Text # level 4: 'Some text' [parent = 'Not found'] "
exception="'Not found'"
if [[ "${error}" == *"${exception}"* ]]; then
  echo "Yes it contains!"
fi
In this case I would expect the script to print "Yes it contains!", but it doesn't work as I expected. It is true that my logs contain special characters as well; does anyone know how I should handle that in the comparison?
For me the if also works on its own, so I might have something wrong in my nested loop. I am running the script as follows.
I have file with errors called mylogfile.txt:
[*] Text Text # level 4: 'Some text' [parent = 'Not found']
Then I have another file where I have exceptions inserted exception.txt:
'Not found'
I do a loop over both files to see if I find anything:
while IFS='' read -r line || [ -n "$line" ]; do
  exception="$line"
  while IFS='' read -r line || [ -n "$line" ]; do
    err="$line"
    if [[ "${err}" == *"${exception}"* ]]; then
      echo "Yes it contains!"
    fi
  done < "mylogfile.txt"
done < "exception.txt"
I don't see anything wrong with your script, and it works when I run it.
That said, looping over files line by line in a shell script is a code smell. Sometimes it's necessary, but often you can trick some command or another into doing the hard work for you. When searching through files, think grep. In this case, you can actually get rid of both loops with a single grep!
$ grep -f exception.txt mylogfile.txt
[*] Text Text # level 4: 'Some text' [parent = 'Not found']
To use it in an if statement, add -q to suppress its normal output and just check the exit code:
if grep -qf exception.txt mylogfile.txt; then
  echo "Yes, it's contained!"
fi
From the grep(1) man page:
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. The empty file contains zero patterns, and therefore matches nothing.
-q, --quiet, --silent
Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected.
Use grep if you want an exact match:
if grep -q "$exception" <<< "$error"; then
  echo "Yes it contains!"
fi
Use the -i switch to ignore case.

Read after 'while read lines' not evaluated (SHELL)

I am currently trying to debug some scripts I've made and I cannot find a way to get a 'read' instruction to execute.
To summarize, I have two functions: one with a 'while read line' loop that is fed by a pipe, and another function that reads user input after the while read has finished.
Let me explain with code:
This is how I call my function ($lines contains multiple lines separated by '\n'):
echo "$lines" | saveLines
saveLines(){
  # ...
  while read line ; do
    # processing lines
  done
  myOtherFunction
}

myOtherFunction(){
  echo "I am here !" # <= This is printed in the console
  read -p "Type in : " tmp # <= Input is never requested from the user, and the prompt is not printed
  echo "I now am here !" # <= This is printed in the console
}
This code is simplified but the spirit is there.
I tried to insert a 'read' instruction before the 'read -p ...'; it did not seem to change anything.
So please, if you can show me my error or tell me why this behavior is expected, I would be very happy. Thanks for your time.
This question is very close to that other question, in a slightly different context. To be more precise and as explained by the OP, the command run was
echo "$lines" | saveLines
meaning that the standard input of the code executed by saveLines is no longer the terminal, but the read end of the pipe fed by the echo ... command.
To solve this it thus suffices to replace
…
read -p "Type in : " tmp
…
with
…
read -p "Type in : " tmp </dev/tty
…
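If /dev/tty is not available (cron jobs, CI), an alternative sketch is to feed the piped data in on a separate file descriptor so that standard input is never redirected at all; the function body here is a made-up stand-in for the OP's processing:

```shell
#!/usr/bin/env bash
lines=$'one\ntwo\nthree'

saveLines() {
    # Read the data from fd 3 instead of stdin, so stdin stays the terminal
    while read -r line <&3; do
        echo "processing: $line"
    done
    # read -p "Type in : " tmp   # would now work: stdin was never touched
}

# Supply the data on fd 3 rather than through a pipe on stdin
out=$(saveLines 3<<< "$lines")
echo "$out"
```

The trade-off is that callers must remember the 3<<< (or 3< file) redirection, whereas the /dev/tty fix keeps the original pipe-based calling convention.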

shell parsing a line to look for a certain tag

I am planning to create a simple script to edit a file based on values stored in a properties file.
Essentially I plan to loop through each line of the original file; when the script comes across a certain tag within a line, say "/#", it will take the text following the tag (e.g. certs) and then call a function that parses the properties file for certain values and adds them to the original file.
So for example the file would have the following line:
"/#certs"
I am not sure how best to search for the tag; I was planning to use an if to find the /# and then split the remaining text to get the string.
while read line
do
  # need to parse line to look for tag
  echo "$line" >> ${NEW_FILE}
done < ${OLD_FILE}
Any help would be greatly appreciated.
=====================================
EDIT:
My explanation was a bit poor; apologies. I am merely trying to get the text following the /#, i.e. I just want the string value that comes after it. I can then call a function based on what that text is.
You can use Bash's regex capabilities:
while read line
do
  if [[ "$line" =~ ^.*/#certs(.*)$ ]]; then
    # do your processing here
    # ${BASH_REMATCH[1]} is the part after /#certs
    echo "${BASH_REMATCH[1]}" >> ${NEW_FILE}
  fi
done < ${OLD_FILE}
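A quick self-contained check of how BASH_REMATCH behaves with that pattern (the sample line is made up for the demo):

```shell
#!/usr/bin/env bash
line='prefix /#certs extra text'
tag=''
if [[ "$line" =~ ^.*/#certs(.*)$ ]]; then
    # capture group 1 is everything after "certs", including the leading space
    tag="${BASH_REMATCH[1]}"
fi
echo "captured:${tag}"
```

Note the captured text keeps the space that follows "certs"; trim it with ${tag# } if that matters to you.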
This is portable to Bourne shell and thus, of course, ksh and Bash.
case $line in
  '/#'* ) tag="${line#/\#}" ;;
esac
To put it into some sort of context, here is a more realistic example of how you might use it:
while read line; do
  case $line in
    '/#'* ) tag="${line#/\#}" ;;
    *) continue ;; # skip remainder of loop for lines without a tag
  esac
  echo "$tag"
  # Or maybe do something more complex, such as
  case $tag in
    cert)
      echo 'We have a cert!' >&2 ;;
    bingo)
      echo 'You are the winner.' >&2
      break # terminate loop
      ;;
  esac
done <$OLD_FILE >$NEW_FILE
For instance, you can search for strings and manipulate them in one step using sed, the stream editor:
echo "$line" | sed -rn 's:^.*/#(certs.+):\1:p'
This prints only the relevant part after the /# of the matching lines.
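A runnable sketch of that sed call, written here with basic-regex escapes so it works with both GNU sed and the BSD sed shipped with macOS (GNU's -r and BSD's -E would also do):

```shell
#!/usr/bin/env bash
line='some prefix /#certs extra'
# -n plus the p flag: print only lines where the substitution matched
tag=$(printf '%s\n' "$line" | sed -n 's:^.*/#\(certs.*\):\1:p')
echo "$tag"
```

Lines without /#certs produce no output at all, which makes the result easy to test in an if or while loop.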

sentence as user input - multiple times from terminal - bash script

I am trying to send lines from the terminal to a text file multiple times using the following script. After writing the first line and its description on the 2nd line, the script asks the user whether they want to enter another line. If yes, the user writes the 3rd line, 4th line and so on...
My problem is that after the 2nd line, i.e. starting from the 3rd line, the script writes only the first word, not the full sentence. How do I solve this?
function ml() {
  echo $# >> $HOME/path/to/file/filename
  echo -n "Enter description and press [ENTER]: "
  read description
  echo -e '\n[\t]' $description >> $HOME/path/to/file/myfile
  while true
  do
    read -p "Add another line?y?n" -n 1 -r
    echo -e "\n"
    if [[ $REPLY =~ ^[Yy]$ ]]
    then
      echo -n "Enter another line and press [ENTER]: "
      read -a meaning
      echo -e "[\t]" $meaning >> $HOME/path/to/file/myfile
    else
      break
    fi
  done
  echo % >> $HOME/path/to/file/myfile
}
I would also like another modification to the code. Instead of asking for y/n input at
read -p "Add another line?y?n" -n 1 -r
can it be arranged so that after inserting the first two lines, every ENTER asks for another line of input and pressing ESCAPE terminates the script?
This is because in your second call to read you are using the -a option, which does the following:
The words are assigned to sequential indices of the array variable aname, starting at 0. aname is unset before any new values are assigned. Other name arguments are ignored.
That appears not to be what you want; a plain read meaning keeps the whole line in a single variable.
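A minimal comparison of the two behaviors, with the input supplied via here-strings so it runs non-interactively (the sample sentence is arbitrary):

```shell
#!/usr/bin/env bash
# read -a splits the line into an array; $meaning alone is only element 0
read -r -a words <<< "hello brave new world"
echo "with -a : $words"          # expands to ${words[0]} only, i.e. "hello"

# a plain read keeps the whole sentence in one variable
read -r meaning <<< "hello brave new world"
echo "without -a: $meaning"
```

So in the script above, replacing read -a meaning with read -r meaning should make the full sentence land in the file.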
