Command only getting first line of multiline here string - bash

I'm trying to pass a here string to a command that expects three values to be entered interactively. It seems like it should be simple enough, but for some reason the program only receives the first line of the here string, ignoring everything after the first \n.
Here is what I'm trying:
command <<< $'firstValue\nsecondValue\nthirdValue\n'
If anyone could tell me what I'm missing, I'd appreciate it greatly. I'm not sure if it's relevant or not, but the second value contains a space. I'm running this on a Mac.

I would recommend reading the values into a variable first and feeding that to the command:
#!/bin/bash
read -r -d '' vals <<EOT
first value
second value
third value
EOT
command <<< "$vals"
If you wanted to run the command each time on each argument:
while read -r src; do command "$src"; done <<< "$vals"
Since you need the arguments run one at a time, this may be easier to manage, and you won't need to worry about the \n newline issues.

It turns out that the command I was passing the here string to couldn't consume the input as fast as the here string supplied it. I ended up using the following workaround:
(printf 'value1\n'; sleep 2; printf 'value2\n'; sleep 2; printf 'value3\n') | command
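If the number of values grows, the same idea works as a loop (the 2-second delay matches the workaround above; `command` is still a stand-in for the actual program being driven):

```shell
#!/bin/bash
# Feed the values one per line, pausing between them so a slow
# reader has time to consume each one before the next arrives.
for v in 'firstValue' 'second value' 'thirdValue'; do
    printf '%s\n' "$v"
    sleep 2
done | command
```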

Related

taking input in while loop overrides the same line in bash script

When I try to read command-line input using a while loop in bash, it overwrites the prompt. How can I prevent this problem? I can't remove the -r flag from the read command; if I do that, I won't be able to use the arrow keys.
A sample of the code is here:
while :
do
#easy life things
name=$(whoami)
prompt=$'\033[1;33m;-)\033[0m\033[1;31m'${name}$'\033[0m\033[1;34m#'$(hostname)$'\033[0m\033[1;32m>>\033[0m\033[1;31m🕸️ \033[0m' ;
echo -n -e "${blue}"
read -r -e -p "${prompt} " cmd
history -s ${cmd}
echo -n -e "${nc}"
# the code that got erased doesn't have any problem; the problem is with the read command
done
I was expecting that it should not overwrite the same line without my having to remove those two flags from the read command.
The script snippet you gave does not represent the environment or code which generated the conditions shown in the image provided.
What you show in the image is a condition encountered when the prompt string is incorrectly defined, containing malformed color-coding sequences.
You are facing the same conditions presented in this other question, where I gave a detailed explanation.
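For reference, a minimal sketch of defining such a prompt safely: with `read -e`, non-printing escape sequences should be wrapped in \001 ... \002 (readline's "invisible" markers, the equivalent of \[ and \] in PS1), so the line editor computes the correct visible prompt width and the arrow keys and history recall don't overwrite the line:

```shell
#!/bin/bash
# \001 and \002 mark the color codes as zero-width for readline,
# so cursor movement is calculated from the visible text only.
red=$'\001\033[1;31m\002'
reset=$'\001\033[0m\002'
prompt="${red}$(whoami)${reset}> "
read -r -e -p "$prompt" cmd
```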

Read command line arguments with input redirection operator in bash

I need to read command-line arguments. The first arg is the script name, the second one is the redirection operator, i.e. "<", and the third one is the input filename. When I tried to use "$#", I got 0. When I used "$*", it gave me nothing. I have to use the "<" operator. My input file consists of all the user input data. If I don't use the operator, the script asks the user for the input. Can someone please help me? Thank you!
Command Line :
./script_name < input_file
Script:
echo "$*" # gave nothing
echo "$#" # gave me 0
I need to read the input filename and store it in a variable. Then I have to change its extension. Any help/suggestions would be appreciated.
When a user runs:
./script_name <input_file
...that's exactly equivalent to if they did the following:
(exec <input_file; exec ./script_name)
...first redirecting stdin from input_file, then invoking the script named ./script_name without any arguments.
There are operating-system-specific interfaces you can use to get the filename associated with a handle (when it has one), but to use one of these would make your script only able to run on an operating system providing that interface; it's not worth it.
# very, very linux-specific, won't work for "cat foo | ./yourscript", generally evil
if filename=$(readlink /proc/self/fd/0) && [[ -e $filename ]]; then
set -- "$@" "$filename" # append filename to the end of the argument list
fi
If you want to avoid prompting for input when an argument is given, and to have the filename of that argument, then don't take it on stdin but as an argument, and do the redirection yourself within the script:
#!/bin/bash
if [[ $1 ]]; then
exec <"$1" # this redirects your stdin to come from the file
fi
# ...put other logic here...
...and have users invoke your script as:
./script_name input_file
Just as ./yourscript <filename runs yourscript with the contents of filename on its standard input, a script invoked with ./yourscript filename which invokes exec <"$1" will have the contents of filename on its stdin after executing that command.
< is used for input redirection. And whatever is at the right side of < is NOT a command line argument.
So, when you do ./script_name < input_file , there will be zero (0) command line arguments passed to the script, hence $# will be zero.
For your purpose you need to call your script as:
./script_name input_file
And in your script you can change the extension with something like:
mv -- "$1" "${1%.*}.new_extension"
Edit: This was not what OP wanted to do.
Although there is already another spot-on answer, I will write this for the sake of completeness. If you have to use the '<' redirection, you can do something like this in your script:
while read -r filename; do
    mv -- "$filename" "${filename}_bak"
done
And call the script as, ./script < input_file. However, note that you will not be able to take inputs from stdin in this case.
Unfortunately, if you're hoping to take redirection operators as arguments to your script, you're not going to be able to do that without surrounding your command line arguments in quotes:
./script_name "<input_file"
The reason for this is that the shell (at least bash or zsh) processes the command before ever invoking your script. When the shell interprets your command, it reads:
[shell command (./script_name)][shell input redirection (<input_file)]
Invoking your script with quotes effectively results in:
[shell command (./script_name)][script argument ("<input_file")]
Sorry this is a few years late; hopefully someone will find this useful.
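If you did go the quoting route, the script would receive the operator as part of `$1` and would have to strip it itself; a sketch:

```shell
#!/bin/bash
# Invoked as: ./script_name "<input_file"
arg=$1
arg=${arg#<}               # drop a leading "<" if the user included one
echo "input file: $arg"
```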

In bash script parameter tracking

I have this command in bash:
cmd -c -s junk text.txt
If I change the command to
cmd -c junk -s text.txt
how do I keep track of which parameter ($2 or $3) is set to junk?
I tried to use a for loop, but I don't know how to pick junk out of the positional parameters.
You need to use getopts inside your script. Something like this should work:
while getopts "c:s:" optionName; do
case "$optionName" in
s) arg="$OPTARG"; echo "-s is present with [$arg]";;
c) arg="$OPTARG"; echo "-c is present with [$arg]";;
esac
done
From the example you show, it seems that the -s option takes a single argument, which is junk in the first example. However, the semantics seem to change in the second example, where -c apparently takes junk as its argument and -s takes text.txt.
In general, arguments to bash commands do not have fixed positions, but if an option takes an argument, that argument should directly follow the option (in the first case, -s).
As anubhava pointed out, you may use getopts to parse the arguments for your script. Still, this will not work for the case where you change the whole semantics, as you seem to do.
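For completeness, a sketch combining `getopts` with the leftover positional arguments (the option letters mirror the question; note that with this approach each option's argument must directly follow it):

```shell
#!/bin/bash
# Parse -c and -s, each taking one argument, then collect the rest.
c_arg='' s_arg=''
while getopts "c:s:" opt; do
    case "$opt" in
        c) c_arg=$OPTARG ;;
        s) s_arg=$OPTARG ;;
        *) exit 1 ;;
    esac
done
shift $((OPTIND - 1))      # remove the parsed options and their arguments
echo "c=$c_arg s=$s_arg remaining=$*"
```

Called as `./script -c junk -s text.txt extra`, this prints `c=junk s=text.txt remaining=extra`.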

Read content from stdout in realtime

I have an external device that I need to power up and then wait for it to get started properly. The way I want to do this is by connecting to it via serial port (via Plink which is a command-line tool for PuTTY) and read all text lines that it prints and try to find the text string that indicates that it has been started properly. When that text string is found, the script will proceed.
The problem is that I need to read these text lines in real time. So far, I have only seen methods that call a command and then process its output once the command has finished. Alternatively, I could run Plink in the background by appending an & to the command and redirecting the output to a file. But this file will be empty at first, so the script will just proceed directly. Is there maybe a way to wait for a new line to appear in a file and read it as soon as it comes? Or does anyone have any other ideas on how to accomplish this?
Here is the best solution I have found so far:
./plink "connection_name" > new_file &
sleep 10 # Because I know that it will take a little while before the correct text string pops up but I don't know the exact time it will take...
while read -r line
do
# If $line is the correct string, proceed
done < new_file
However, I want the script to proceed directly when the correct text string is found.
So, in short, is there any way to access the output of a command continuously, before it has finished executing?
This might be what you're looking for:
while read -r line; do
# do your stuff here with $line
done < <(./plink "connection_name")
And if you need to sleep 10:
{
sleep 10
while read -r line; do
# do your stuff here with $line
done
} < <(./plink "connection_name")
The advantage of this solution compared to the following:
./plink "connection_name" | while read -r line; do
# do stuff here with $line
done
(that I'm sure someone will suggest soon) is that the while loop is not run in a subshell.
The construct <( ... ) is called Process Substitution.
Hope this helps!
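The subshell difference is easy to demonstrate: a variable set inside the piped loop is lost when the subshell exits, while the process-substitution form keeps it (`printf` stands in for `./plink` here):

```shell
#!/bin/bash
found=no
printf 'a\nREADY\n' | while read -r line; do
    [ "$line" = READY ] && found=yes
done
echo "pipe: $found"            # prints "pipe: no" - the loop ran in a subshell

found=no
while read -r line; do
    [ "$line" = READY ] && found=yes
done < <(printf 'a\nREADY\n')
echo "substitution: $found"    # prints "substitution: yes" - current shell
```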
Instead of using a regular file, use a named pipe.
mkfifo new_file
./plink "connection_name" > new_file &
while read -r line
do
# If $line is the correct string, proceed
done < new_file
The while loop will block until there is something to read from new_file, so there is no need to sleep.
(This is basically what process substitution does behind the scenes, but doesn't require any special shell support; POSIX shell does not support process substitution.)
Newer versions of bash (4.2 or later) also support an option that lets the final command of a pipeline execute in the current shell, making the simple solution
shopt -s lastpipe
./plink "connection_name" | while read -r line; do
# ...
done
possible.
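If the only goal is to block until a particular line appears, `grep -q` already does exactly that: it exits as soon as it sees the first match (the `READY` marker string here is a placeholder for whatever your device actually prints):

```shell
#!/bin/bash
# Block until the device's ready message appears, then continue.
if ./plink "connection_name" | grep -q "READY"; then
    echo "device started properly"
fi
```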

Shell Scripting: Generating output file name in script and then writing to it

I have a shell script where a user passes different sizes as command line arguments. For each of those sizes, I want to perform some task, then save the output of that task into a .txt file with the size in the name.
How can I take the command line passed and make that part of a stringname for the output file? I save the output file in a directory specified by another command line argument. Perhaps an example will clear it up.
In the for loop, the i value represents the number of the command-line argument I need to use, but $$i doesn't work.
./runMe arg1 arg2 outputDir [size1 size2 size3...]
for ((i=4; i<$#; i++ ))
do
ping -s $$i google.com >> $outputDir/$$iresults.txt
done
I need to know how to build the $outputDir/$$iresults.txt string. Also, ping -s $$i doesn't work. It's like I need two levels of replacement: first replace the inner $i with the value in the loop, say 4, making it $4, and then replace $4 with the 4th command-line argument!
Any help would be greatly appreciated.
Thanks!
Indirection uses the ! substitution prefix:
echo "${!i}"
But you should be using a bare for after shifting the earlier arguments out:
shift 3   # drop arg1, arg2, and outputDir so only the sizes remain
for f
do
echo "$f"
done
for ARG in "$@"; do
    $COMMAND > "${ARG}.txt"
done
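Applied to the original script, a sketch (the `-c 3` count flag is an addition so each ping terminates; `-s` sets the packet size as in the question):

```shell
#!/bin/bash
# Usage: ./runMe arg1 arg2 outputDir size1 [size2 ...]
outputDir=$3
shift 3                    # drop arg1, arg2, and outputDir
for size in "$@"; do
    ping -c 3 -s "$size" google.com >> "$outputDir/${size}results.txt"
done
```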
