I'm trying to pass one program's output as input to another program, but I don't know how to make the second one wait until the first has finished.
npm run test | ./script.js
When I run that, the tests start and the script throws an error immediately.
The pipe connects the first command's standard output directly to the second command's standard input while both run. You can use command substitution (which runs in a subshell) to let the first command finish, then hand its collected output to the script's standard input:
./script.js <<< "$(npm run test)"
Another way to do this is to store the output in a temp file, then use that file as the script's input:
npm run test > temp_file ; ./script.js < temp_file
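To see the timing difference, a minimal sketch (the sleep stands in for a slow first command):
# With a pipe, both sides start at once; cat prints as data arrives:
( sleep 2 ; echo done ) | cat
# With command substitution, cat starts only after the subshell has exited:
cat <<< "$( sleep 2 ; echo done )"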
I began playing CTF challenges, and I encountered a problem where I needed to send an exploit to a binary and then interact with the spawned shell.
I found a solution to this problem which looks something like this:
(echo -ne "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\xbe\xba\xfe\xca" && cat) | nc pwnable.kr 9000
Meaning:
without the "cat" sub-command, I couldn't interact with the shell, but with it, i now able to send commands into the spawned shell and get the returned output to my console stdout.
What exactly happens there? This command line confuses me.
If you just type in cat at the command line, you'll be able to see that this command simply copies stdin to stdout one line at a time. It will carry on doing this until you either quit with Ctrl-C or send an EOF with Ctrl-D.
In this example you're running cat immediately after successfully printing the payload (the && operator tells the shell to run the second command only if the first one exits with status zero, i.e., no error). As a result, the pipe won't see an EOF until you terminate cat as described above. When this is piped to nc, everything you type in is sent via cat to the remote server, and everything it sends back appears on your stdout.
So yes, in effect you end up with an interactive shell. You can get pretty much the same effect on your own machine by running cat | sh.
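A local stand-in for the nc example, in the same shape (the echoed string is just an illustrative payload):
# sh runs the echoed line first; cat then keeps the pipe open, so
# everything you type is executed by sh until Ctrl-D (EOF) or Ctrl-C.
( echo 'echo payload delivered' && cat ) | sh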
Specifics:
I'm trying to build a bash script which needs to do a couple of things.
Firstly, it needs to run a third-party script that I cannot modify. This script will build a project and then start a node server which outputs data to the terminal continually. This process needs to continue indefinitely, so I can't wait on an exit code.
Secondly, I need to wait for a specific line of output from the first script, namely 'Started your app.'.
Once that line has been output to the terminal, I need to launch a separate set of commands, either from another subscript or from an if or while block, which will change a few lines of code in the project that was built by the first script to resolve some dependencies for a later step.
So, how can I capture the output of the first subscript and use it to run another set of commands when a particular line is output to the terminal, all while letting the first script keep running in the terminal, without using timers, and without writing subscript1's output to an ever-growing file, since it runs indefinitely?
Pseudo-code:
#!/usr/bin/env bash
# This script needs to stay running & will output to the terminal (at some point)
# a string that we need to wait/watch for to launch subscript2
sh subscript1
# This can't run until subscript1 has output a particular string to the terminal
# This could be another script, or an if or while block
sh subscript2
I have been beating my head against my desk for hours trying to get this to work. Any help would be appreciated!
I think this is a bad idea — much better to have subscript1 changed to be automation-friendly — but in theory you can write:
sh subscript1 \
  | {
      # Re-echo each line so subscript1's output still reaches the terminal.
      while IFS= read -r line ; do
        printf '%s\n' "$line"
        if [[ "$line" = 'Started your app.' ]] ; then
          # Marker line seen: launch subscript2 in the background.
          sh subscript2 &
          break
        fi
      done
      # After the loop breaks, cat takes over copying the rest of
      # subscript1's output to the terminal, with no growing temp file.
      cat
    }
I am trying to start a number of jobs in different screens from a shell script. Each job will read in a different value of a parameter from a premade input file and run a simulation based on that value, then tee or > the output to a differently named file. So in a do loop around all the jobs, job 40 on screen "session40" will read in line 40 of the input file, run the simulation, and output to output40.dat, for example. (I am basically trying to run a few jobs in parallel in a very elementary way; it appears my computer has plenty of RAM for this).
I am encountering the issue that the > and | tee commands do not seem to work when I use "exec" to run a command on the remote screen, despite having attempted to start a bash shell there; when I use these commands, it just prints to standard output. Although these commands do work with the command "stuff," I do not know how to pass the job number to stuff, as it appears to only work with string inputs.
The current attempted script is as follows. I have replaced the simulation script with echo and > for a simpler example of the problem. Neither of the last two screen lines works.
for i in {1..10}; do
  screen -Sdm session$i bash
  screen -S session$i -X exec echo $i > runnumber$i.output    # method 1
  screen -S session$i -X stuff $'echo $i > runnumber$i.output\r'    # method 2
done
Might there be an easy fix?
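A minimal sketch of one possible fix, offered as an assumption rather than a tested answer: in method 1 the local shell processes the > before screen ever runs, and in method 2 the $'...' quoting never expands $i, so the session receives a literal $i. Double quotes let the local shell substitute the job number before the text is stuffed:
for i in {1..10}; do
  # -dmS: start a detached session with this name, running bash.
  screen -dmS "session$i" bash
  # $i expands locally inside the double quotes; the appended $'\r'
  # acts as Enter, so the redirection is executed by the remote bash.
  screen -S "session$i" -X stuff "echo $i > runnumber$i.output"$'\r'
done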
I want to implement a shell script that runs the command xyz and stores its output in a variable, while at the same time forwarding the command's output to the shell script's stdout.
This is because I want to launch this script via launchd, let it automatically log the script's output, but also let the script push the individual command's output to the web. The script should not simply buffer the command's output and print it after the command has run, but forward it in real time.
Is something like this possible, and if so, how do you implement it?
Thanks
thel30n
You are looking for the tee command:
VAR=$(echo 'test' | tee /dev/tty)
test
echo $VAR
test
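One caveat, noted as an assumption about the launchd deployment: a launchd job has no controlling terminal, so /dev/tty is not usable there. Sending the live copy to stderr instead avoids the terminal requirement:
VAR=$(xyz | tee /dev/stderr)   # xyz is the question's placeholder command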
I believe there is no way to save the log into a shell variable and avoid buffering the command's output at the same time. An alternative is to save the log messages to a file using tee(1). For example:
LOGFILE=/path/to/logfile
run_and_log() {
    # "$@" runs the function's arguments as a command; tee -a appends
    # a copy of the output to the log while still printing to stdout.
    "$@" | tee -a "$LOGFILE"
}
run_and_log xyz
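Combining the two answers, a minimal sketch (xyz is still the question's placeholder): the log file receives each line as xyz produces it, while the variable holds the complete output once xyz exits:
LOGFILE=/path/to/logfile
VAR=$(xyz | tee -a "$LOGFILE")   # the log side streams in real time
printf '%s\n' "$VAR"             # the full output is available afterwards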
I have a bash script with the following command
command1 < input.txt > output.txt
When I run it, the output.txt file is created and filled with almost 5 MB of data. The problem is that when it's run by cron it doesn't work: it creates output.txt, but the file is empty. I believe that what's happening is that it's not reading the input from input.txt, so command1 exits right away, creating the output file but not printing anything.
Does anyone know what is happening and how it can be fixed?
EDIT: After trying a lot of options, it appears that the problem is that cron is configured to redirect stdin, so whether I redirect stdin or pipe processes together, nothing is able to read anything from stdin. Any solution?
Log in as superuser and invoke your command with 'env':
env -i command1 < input.txt > output.txt
With this you run the command in (nearly) the same environment cron has.
You will then get error messages describing your problem.
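If the errors point at missing files or stdin, a common fix, sketched here with assumed paths, is to spell everything out in the crontab line itself, since cron starts jobs in a different working directory and with its own idea of stdin:
# crontab entry: absolute paths, input and error output wired up explicitly
*/5 * * * * /usr/local/bin/command1 < /home/user/input.txt > /home/user/output.txt 2>> /home/user/cron.err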