I tried to use here-strings to automate a script's input, and as part of that I wanted to quit "more" while it was paging a file.
I've used:
$ echo q | more big_file.txt
$ more big_file.txt <<< q
$ yes q | more big_file.txt
$ echo -e "q\n" | more build.txt
but "more" command fails to quit.
Generally, the above mentioned input methods work for other commands in bash, but more seems to be exceptional,
Any idea what makes this a foul attempt?
NOTE: I don't want the data out of "more", but to quit "more" through an automated sequence is the target
When more detects that it's running on a terminal, it takes its input only from that terminal. This is so you can run things like:
$ cat multiple_files*.txt | more
(When it's not on a terminal, it doesn't even page; it degrades to behaving like cat.)
Seriously though, whatever your reason for wanting to keep more in the loop, you're not going to get it to quit voluntarily with anything other than terminal input. So:
either you fake terminal input, e.g. with expect,
or you make it quit involuntarily, with a signal. Try SIGQUIT, move on to SIGTERM, and fall back to SIGKILL as a last resort. Both approaches are sketched below.
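For the faking route, here's a minimal expect sketch (an illustration only; it assumes more shows its usual --More-- prompt):

#!/usr/bin/expect -f
# Run more on a pseudo-terminal, wait for its prompt, then press q.
spawn more big_file.txt
expect -- "--More--"
send "q"
expect eof

For the signal route, from another shell, something along these lines:

$ kill -QUIT "$(pgrep -n more)"   # pgrep -n picks the most recently started more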
I've started playing CTF challenges, and I encountered a problem where I needed to send an exploit to a binary and then interact with the spawned shell.
I found a solution to this problem which looks something like this:
(echo -ne "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\xbe\xba\xfe\xca" && cat) | nc pwnable.kr 9000
Meaning:
without the "cat" sub-command, I couldn't interact with the shell, but with it, i now able to send commands into the spawned shell and get the returned output to my console stdout.
What exactly happens there? this command line confuses me
If you just type in cat at the command line, you'll be able to see that this command simply copies stdin to stdout one line at a time. It will carry on doing this until you either quit with Ctrl-C or send an EOF with Ctrl-D.
In this example you're running cat immediately after successfully printing the payload (the && operator tells the shell to run the second command only if the first one exits with status zero, i.e., no error). As a result, nc won't see an EOF on its stdin until you terminate cat as described above. While this is piped to nc, everything you type is sent via cat to the remote server, and everything it sends back appears on your stdout.
So yes, in effect you end up with an interactive shell. You can get pretty much the same effect on your own machine by running cat | sh.
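You can reproduce the mechanics locally with a toy version (the payload here is just an echo, purely for illustration). Whatever you type afterwards is forwarded by cat to the inner sh and executed; press Ctrl-D to send EOF and end the session:

$ (echo 'echo payload delivered' && cat) | sh
payload delivered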
I have a bash script that launches vim for me, after it reads some input from the command line.
I use it to edit scripts I have somewhere in the path, easily. It does some other things too, but the simplest form of it is the following:
vibin.sh
#!/usr/bin/env bash
PROG=$(which "$1")
vim "$PROG"
Often I'll be using this to edit something I recently ran, to make a quick adjustment.
For example, if I ran
$ my_script.sh a b c d
I might go back in the history, then insert vibin.sh at the start of the line to edit it.
$ vibin.sh my_script.sh a b c d
This works fine, but fails if my previous command was being piped to something, e.g.
$ vibin.sh my_script.sh a b c d | tee /tmp/out
Is there a way to make my script break out of the pipeline and still work correctly? Currently vim gets into a weird state when I do this; I can exit it, but I'd prefer a better solution.
I can detect that I'm running in a pipeline and abort, but I'd like it to actually do what I wanted: edit the script!
# ensure we're not in a pipeline
if [ ! -t 1 ]; then
    exit 1
fi
Try
vim "$PROG" < /dev/tty > /dev/tty
That is, force standard input and output to be redirected from/to the controlling terminal. It's a workaround for pipes and redirections.
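Folding that into the script, a minimal sketch (same script as in the question, redirecting only when stdout isn't a terminal):

#!/usr/bin/env bash
PROG=$(which "$1")
if [ -t 1 ]; then
    vim "$PROG"
else
    # stdout is a pipe or redirect: reattach vim to the terminal
    vim "$PROG" < /dev/tty > /dev/tty
fi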
I have a script that calls a never-ending command which keeps producing output. I want to check whether that command outputs a specific string, say "Success!", and then stop it and move on to the next line of my script.
How is this possible?
Thanks in advance.
Edit:
#!/bin/bash
xrandr --output eDP-1-1 --mode 2048x1536
until steam steam://run/730 | grep "Game removed"; do
    :
done
echo "Success!"
xrandr --output eDP-1-1 --mode 3840x2160
All my script does is change the resolution, start a Steam game, wait for a message in the output indicating that the game was closed, and then change the resolution back to what it was.
The message I've mentioned appears every time, but the Steam process is still running. Could that be the problem?
If I close Steam manually then the script continues like I want it to.
You need to use the until built-in construct for this:
until command-name | grep -q "Success!"; do
    :
done
This will wait until the string you want appears on your command's stdout, at which point grep succeeds and the loop exits. The -q flag makes grep run silently (and exit at the first match), and : is shorthand for a no-op.
The difference between until and while: the former runs the loop body until the exit code of the command becomes 0, i.e., success, at which point it exits; while does the reverse.
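Here's a self-contained way to try it out; the producer function below is just a made-up stand-in for the endless command:

#!/bin/bash
# Stand-in for the never-ending command: emits Success! after a few lines,
# then keeps producing output forever.
producer() {
    for i in 1 2 3; do
        echo "working $i"
        sleep 1
    done
    echo "Success!"
    while :; do echo "still running"; sleep 1; done
}

until producer | grep -q "Success!"; do
    :
done
echo "Detected Success!, moving on"

Once grep -q matches, it exits with status 0, so the until condition is satisfied and the script moves on; the producer then dies with SIGPIPE on its next write into the broken pipe.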
I am running Ubuntu 13.10 and want to write a bash script that will execute a given task at intervals that aren't known in advance. My understanding is that cron jobs require me to know when the task will run next, so it was recommended that I use "at".
I'm having a bit of trouble using "at." Based on some experimentation, I've found that
echo "hello" | at now + 1 minutes
runs in my terminal (with and without the quotes). Running "atq" shows that the command is in the queue. However, I never see the results of the command. I assume I'm doing something wrong, but the man pages don't seem to be telling me anything useful.
Thanks in advance for any help.
Besides the fact that at jobs run without a terminal (input comes from /dev/null, and any output is normally mailed to you rather than printed on screen), your command would also not run, since what you're passing to at is not echo hello but just hello. Unless hello really is an existing command, it won't run. What you probably want is:
echo "echo hello" | at now + 1 minutes
If you want to know if your command is really running, try redirecting the output to a file:
echo "echo hello > /var/tmp/hello.out" | at now + 1 minutes
Check the file later.
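For example, the whole round trip (the path is just illustrative):

$ echo "date > /var/tmp/at.out" | at now + 1 minutes
$ atq                    # the job sits in the queue until it fires
$ cat /var/tmp/at.out    # a minute or two later: the timestamp the job wrote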
I want to execute Program_A and have its output examined by Program_B, e.g.
$ Program_A | Program_B
Within Program_B, I would like to be able to terminate Program_A if a certain condition matches.
I am looking for a way to do this in bash (I already have a solution in Python with popen).
Also, pidof Program_A and the like are not a good solution, since there can be many instances of it and only one particular instance should be terminated.
You can get the process ID of Program_A inside Program_A itself and print it as the first line of output to Program_B. If you do this, Program_B can then kill Program_A with a kill call.
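A minimal sketch of that idea (the script names and the trigger condition are made up):

program_a.sh:
#!/usr/bin/env bash
echo "$$"                # announce our own PID on the first line
while :; do
    echo "some data"
    sleep 1
done

program_b.sh:
#!/usr/bin/env bash
read -r writer_pid       # first line of input is Program_A's PID
while read -r line; do
    if [ "$line" = "some data" ]; then   # the condition that should stop A
        kill "$writer_pid"
        break
    fi
done

Run as: $ ./program_a.sh | ./program_b.sh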
In Program_B you could close stdin when you detect the termination condition. That will cause Program_A to receive an error (SIGPIPE/EPIPE) on its next write, and possibly terminate.
Now, Program_A may choose to ignore those errors (I don't know whether it's your own application or a pre-compiled tool), and here comes the fun part. In Program_B you can examine where your stdin comes from:
my_input=`readlink /proc/$$/fd/0`
then, if it's something like "pipe:[134414]", find any other process whose stdout is the same pipe:
case "$my_input" in
pipe:*)
for p in /proc/[0-9]*
do
if [ "$my_input" = "`readlink $p/fd/1`" ]
then
bad_guy=`sed -e 's:.*/::' <<< "$p"`
echo "Would you please finish, Mr. $bad_guy ?!"
kill $bad_guy
break
fi
done
;;
*)
echo "We're good";;
esac