Makefile process multiline output - makefile

I have a command which outputs lines. I would like to process each line in parallel with Make.
target:
	command > $@
so now, I have a file called "target". In shell, I would do
while read -r line; do
    echo "$line"
done < target
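One way to get per-line parallelism from that file is to delegate the fan-out to xargs inside a recipe. This is only a sketch, not necessarily a pure-Make solution: the process target name, the 4-way parallelism, and the echo placeholder are all arbitrary, and GNU xargs users may want -d '\n' to split strictly on newlines rather than on whitespace.

target:
	command > $@

# run the per-line work up to 4 at a time; replace "echo processing" with the real command
process: target
	xargs -P 4 -n 1 echo processing < $<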

Related

Shell script can read file line by line but not perform actions for each line

I'm trying to run this command over multiple machines
sshpass -p 'nico' ssh -o 'StrictHostKeyChecking=no' nico@x.x.x.x "mkdir test"
The IPs are stored in the following .txt file
$ cat ips.txt
10.0.2.15
10.0.2.5
I created a bash script that reads this file line by line. If I run it with an echo:
#!/bin/bash
input="ips.txt"
while IFS= read -r line
do
    echo "$line"
    #sshpass -p 'nico' ssh -o 'StrictHostKeyChecking=no' nico@$line "mkdir test"
done < "$input"
It prints every line:
$ ./variables.sh
10.0.2.15
10.0.2.5
This makes me understand that the script is working as intended. However, when I replace the echo line with the command I want to run for each line:
#!/bin/bash
input="ips.txt"
while IFS= read -r line
do
    #echo "$line"
    sshpass -p 'nico' ssh -o 'StrictHostKeyChecking=no' nico@$line "mkdir test"
done < "$input"
It only performs the action for the first IP on the file, then stops. Why?
Managed to solve this by using a for instead of a while. Script ended up looking like this:
for file in $(cat ips.txt)
do
    sshpass -p 'nico' ssh -o 'StrictHostKeyChecking=no' nico@$file "mkdir test"
done
While your workaround works, it is not the explanation.
You can find the explanation here: ssh breaks out of while-loop in bash
In short:
The while loop keeps reading from the file descriptor defined in the loop header ($input in your case).
ssh (or sshpass) also reads data from stdin, which in your case is redirected from $input. That is what hides the problem: we don't expect ssh to read that data.
You can run into the same strange behaviour with commands like ffmpeg or mplayer inside a while loop: both read the keyboard (stdin) while they run, so they consume whatever is left on the file descriptor.
Another good and funny example:
#!/bin/bash
{
    echo first
    for ((i=0; i < 16384; i++)); do echo "testing"; done
    echo "second"
} > test_file

while IFS= read -r line
do
    echo "Read $line"
    cat | uptime > /dev/null
done < test_file
In the first part we write one line "first", then 16384 lines of "testing", then a last line "second". The 16384 "testing" lines add up to exactly 128Kb.
In the second part, the command "cat | uptime" reads (and discards) exactly that 128Kb from the loop's input, so the script prints only:
Read first
Read second
As a solution you can use a for loop, as you did.
Or use "ssh -n".
Or play with the file descriptors; you can find an example in the link I gave above.
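For instance, here is a minimal variant of the original loop that keeps the while but stops ssh from reading the loop's input. This is only a sketch; it assumes sshpass keeps supplying the password through the pty it allocates, so that -n (or redirecting stdin from /dev/null) only affects the remote session's stdin:

#!/bin/bash
input="ips.txt"
while IFS= read -r line
do
    # -n (or: < /dev/null) keeps ssh from consuming the loop's stdin
    sshpass -p 'nico' ssh -n -o 'StrictHostKeyChecking=no' nico@"$line" "mkdir test"
done < "$input"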

Issue with echo output to file in bash

I am running into an issue trying to echo some strings into files inside a shell script.
I am calling the shell script through pyspark/spark's rdd.pipe() command, and I checked to make sure the input is coming through by echoing each line in the shell script.
Here is the shell script code:
#!/bin/sh
while read -r one; do
    read -r two
    read -r three
    read -r four
    read -r five
    read -r six
    read -r seven
    read -r eight
    echo -e "$one\n$two\n$three\n$four\n" >> 1.txt
    echo -e "$five\n$six\n$seven\n$eight\n" >> 2.txt
done
I ran the echo command WITHOUT piping to a file and that showed up in the output back in my spark program. The input to the shell script is just strings. Does anyone have ideas why 1.txt and 2.txt aren't being written to?
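A common first diagnostic for this kind of symptom is to log where the script actually runs and to write to absolute paths, since stderr is not captured by rdd.pipe() and relative paths resolve against the executor's working directory. This is only a sketch; the guess that the working directory is the culprit, and the /tmp output paths, are assumptions:

#!/bin/sh
# Hypothetical debugging variant of the script above
echo "running in $(pwd)" >&2    # stderr ends up in the executor logs, not in the piped output
out1=/tmp/1.txt                 # absolute paths instead of relative 1.txt / 2.txt
out2=/tmp/2.txt
while read -r one; do
    read -r two; read -r three; read -r four
    read -r five; read -r six; read -r seven; read -r eight
    printf '%s\n%s\n%s\n%s\n\n' "$one" "$two" "$three" "$four" >> "$out1"
    printf '%s\n%s\n%s\n%s\n\n' "$five" "$six" "$seven" "$eight" >> "$out2"
done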

Read from .txt file containing executable commands to be executed w/ output of commands executed sent to another file

When I run my script, the .txt file is read and the commands are assigned to $eggs. To execute the commands and redirect their output to a file I use echo $eggs>>eggsfile.txt, but when I cat the file I just see all the commands, not the output of executing them.
echo "Hi, $USER"
cd ~/mydata
echo -e "Please enter the name of commands file:/s"
read fname
if [ -z "$fname" ]
then
exit
fi
terminal=`tty`
exec < $fname #exec destroys current shell, opens up new shell process where FD0 (STDIN) input comes from file fname
count=1
while read line
do
echo $count.$line #count line numbers
count=`expr $count + 1`; eggs="${line#[[:digit:]]*}";
touch ~/mydata/eggsfile.txt; echo $eggs>>eggsfile.txt; echo "Reading eggsfile contents: $(cat eggsfile.txt)"
done
exec < $terminal
If you just want to execute the commands, and log the command name before each command, you can use 'sh -x'. You will get '+ command' before each command.
sh -x commands
+ pwd
/home/user
+ date
Sat Apr 4 21:15:03 IDT 2020
If you want to build your own (custom formatting, etc.), you will have to force execution of each command. Something like:
cd ~/mydata
count=0
while read line ; do
    count=$((count+1))
    echo "$count.$line"
    eggs="${line#[[:digit:]]*}"
    echo "$eggs" >> eggsfile.txt
    # Execute the line.
    ($line) >> eggsfile.txt
done < $fname
Note that this approach uses local redirection for the while loop, avoiding having to revert the input back to the terminal.
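The loop above assumes $fname is already set (in the original script it comes from the earlier read). A self-contained sketch, with a hypothetical script name and a made-up default commands-file name, could look like this:

#!/bin/bash
# Hypothetical standalone wrapper around the loop above (save as e.g. runlog.sh)
fname="${1:-commands.txt}"    # commands file passed as the first argument; the default is just an example
count=0
while read -r line ; do
    count=$((count+1))
    echo "$count.$line"
    eggs="${line#[[:digit:]]*}"
    echo "$eggs" >> eggsfile.txt
    # Execute the line and append its output right after the command itself.
    ($line) >> eggsfile.txt
done < "$fname"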

Safe shell redirection when command not found

Let's say we have a text file named text (it doesn't matter what it contains) in the current directory. When I run the command (in Ubuntu 14.04, bash version 4.3.11):
nocommand > text # make sure nocommand doesn't exist on your system
It reports a 'command not found' error and it erases the text file! I just wonder if I can avoid clobbering the file when the command doesn't exist.
I tried set -o noclobber, but the same problem happens if I run:
nocommand >| text # make sure nocommand doesn't exist on your system
It seems that bash sets up the redirection before looking for the command to run. Can anyone give me some advice on how to avoid this?
Actually, the shell first looks at the redirection and creates the file. It evaluates the command after that.
Thus what happens exactly is: because it's a > redirection, the shell first truncates the file to empty, then evaluates the command, which does not exist; that produces an error message on stderr and nothing on stdout, so the file stays empty.
I agree with Nitesh that you simply need to check whether the command exists first, but according to this thread you should avoid using which. A good starting point is to check at the beginning of your script that you can run all the required commands (see the thread, 3 solutions) and abort the script otherwise.
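A sketch of that up-front check using the shell built-in command -v (the usual replacement for which); the list of required commands here is only illustrative:

#!/bin/sh
# Abort before doing any work if a required command is missing
for cmd in awk sed curl; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
        echo "required command not found: $cmd" >&2
        exit 1
    fi
done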
Write to a temporary file first, and only move it into place over the desired file if the command succeeds.
nocommand > tmp.txt && mv tmp.txt text
This avoids errors not only when nocommand doesn't exist, but also when an existing command exits before it can finish writing its output, so you don't overwrite text with incomplete data.
With a little more work, you can clean up the temp file in the event of an error.
{ nocommand > tmp.txt || { rm tmp.txt; false; } } && mv tmp.txt text
The inner command group ensures that the exit status of the outer command group is non-zero so that even if the rm succeeds, the mv command is not triggered.
A simpler command that carries the slight risk of removing the temp file when nocommand succeeds but the mv fails is
nocommand > tmp.txt && mv tmp.txt text || rm tmp.txt
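A variation on the same idea uses mktemp so the temporary name cannot collide with an existing file; this is only a sketch and carries the same caveat about losing the output if the final mv fails:

tmp=$(mktemp ./text.XXXXXX) || exit 1   # create the temp file in the current directory
nocommand > "$tmp" && mv "$tmp" text || rm -f "$tmp"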
This would write to file only if the pipe sends at least a single character:
nocommand | (
    IFS= read -d '' -n 1 || exit
    exec >myfile
    [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
    exec cat
)
Or using a function:
function protected_write {
    IFS= read -d '' -n 1 || exit
    exec >"$1"
    [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
    exec cat
}
nocommand | protected_write myfile
Note that if the lastpipe option is enabled, you'll have to run it in a subshell:
nocommand | ( protected_write myfile )
Optionally, you can also make the function run in a subshell by default:
function protected_write {
    (
        IFS= read -d '' -n 1 || exit
        exec >"$1"
        [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
        exec cat
    )
}
( ) creates a subshell. A subshell is a fork of the shell and runs in its own process. In x | y, y also runs in a subshell by default, unless the lastpipe option (try shopt lastpipe) is enabled.
IFS= read -d '' -n 1 waits for a single character (see help read) and returns a zero exit status when it reads one, which skips the exit.
exec >"$1" redirects stdout to the file, so everything that prints to stdout goes to the file instead.
Any character other than \x00 that is read is stored in REPLY; that is why we printf '\x00' when REPLY holds the null (empty) value.
exec cat replaces the subshell's process with cat, which sends everything else it receives to the file and finishes the remaining job. See help exec.
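A quick way to see the behaviour, assuming the function above has been sourced (the file name is arbitrary):

$ rm -f myfile
$ nocommand | protected_write myfile    # nothing arrives on the pipe, so myfile is never created
$ echo hello | protected_write myfile   # at least one character arrives, so myfile is written
$ cat myfile
hello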
If you do:
set -o noclobber
then
invalidcmd > myfile
if myfile exists in the current directory, then you will get:
-bash: myfile: cannot overwrite existing file
Check using "which" command
#!/usr/bin/env bash
command_name="npm2"    # Add your command here
command=`which $command_name`
if [ -z "$command" ]; then    # which printed nothing: the command was not found
    echo "Command not found"
else    # The command exists, go ahead with your logic
    echo "$command"
fi
Hope this helps

pass a command as an argument to bash script

How do I pass a command as an argument to a bash script?
In the following script, I attempted to do that, but it's not working!
#! /bin/sh
if [ $# -ne 2 ]
then
    echo "Usage: $0 <dir> <command to execute>"
    exit 1;
fi;
while read line
do
    $($2) $line
done < $(ls $1);
echo "All Done"
A sample usage of this script would be
./myscript thisDir echo
Executing the call above ought to echo the name of all files in the thisDir directory.
First big problem: $($2) $line executes $2 by itself as a command, then tries to run its output (if any) as another command with $line as an argument to it. You just want $2 $line.
Second big problem: while read ... done < $(ls $1) doesn't read from the list of filenames, it tries to read the contents of a file named by the output of ls -- this will fail in any number of ways depending on the exact circumstances. Process substitution (while read ... done < <(ls $1)) would do more-or-less what you want, but it's a bash-only feature (i.e. you must start the script with #!/bin/bash, not #!/bin/sh). And anyway it's a bad idea to parse ls; you should almost always just use a shell glob (*) instead.
The script also has some other potential issues with spaces in filenames (using $line without double-quotes around it, etc), and weird stylistic oddities (you don't need ; at the end of a line in shell). Here's my stab at a rewrite:
#! /bin/sh
if [ $# -ne 2 ]; then
    echo "Usage: $0 <dir> <command to execute>"
    exit 1
fi

for file in "$1"/*; do
    $2 "$file"
done
echo "All done"
Note that I didn't put double-quotes around $2. This allows you to specify multiword commands (e.g. ./myscript thisDir "cat -v" would be interpreted as running the cat command with the -v option, rather than trying to run a command named "cat -v"). It would actually be a bit more flexible to take all arguments after the first one as the command and its argument, allowing you to do e.g. ./myscript thisDir cat -v, ./myscript thisDir grep -m1 "pattern with spaces", etc:
#! /bin/sh
if [ $# -lt 2 ]; then
    echo "Usage: $0 <dir> <command to execute> [command options]"
    exit 1
fi

dir="$1"
shift
for file in "$dir"/*; do
    "$@" "$file"
done
echo "All done"
your command "echo" command is "hidden" inside a sub-shell from its argments in $line.
I think I understand what your attempting in with $($2), but its probably overkill, unless this isn't the whole story, so
while read line ; do
    $2 $line
done < <(ls $1)
should work for your example with thisDir echo. If you really need the command substitution and the subshell, then put your arguments together so they can see each other:
$($2 $line)
And as D.S. mentions, you might need eval before either of these.
IHTH
You could try (in your code):
echo "$2 $line" | sh
or use eval:
eval "$2 $line"
