I want to know how I can provide inputs to a program that presents its own command prompt, using shell scripting.
Example, where '#' is the usual shell prompt and '>' is a prompt specific to my program:
mypc:/home/usr1#
mypc:/home/usr1# myprogram
myprompt> command1
response1
myprompt> command2
response2
myprompt> exit
mypc:/home/usr1#
mypc:/home/usr1#
If I understood correctly, you want to send specific commands to your program myprogram sequentially.
To achieve that, you could use a simple expect script. I will assume that myprogram's prompt is myprompt>, and that the string myprompt> does not appear in response1:
#!/usr/bin/expect -f
#this is the process we monitor
spawn ./myprogram
#we wait until 'myprompt>' is displayed on screen
expect "myprompt>" {
#when this appears, we send the following input (\r is the ENTER key press)
send "command1\r"
}
#we wait until the 1st command is executed and 'myprompt>' is displayed again
expect "myprompt>" {
#same steps as before
send "command2\r"
}
#if we want to manually interact with our program, uncomment the following line.
#otherwise, the program will terminate once 'command2' is executed
#interact
To launch it, make the script executable and invoke ./myscript.expect from the same folder as myprogram.
Given that myprogram is a script, it is presumably prompting for input with something like while read IT; do ...something with $IT...; done. It is hard to say exactly how to change that script without seeing it; echo -n 'myprompt> ' would be the simplest addition.
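For illustration, here is a minimal sketch of what such a read loop inside myprogram might look like (the command names and responses are assumptions, since the actual script was not posted):
#!/bin/bash
# Hypothetical inner loop of myprogram: print a prompt, read a line, act on it.
while true; do
    echo -n 'myprompt> '
    read -r IT || break            # stop on end-of-file (Ctrl+D)
    case $IT in
        exit) break ;;
        command1) echo response1 ;;
        command2) echo response2 ;;
        *) echo "unknown command: $IT" ;;
    esac
done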
This can be done with PS3 and the select construct:
#!/bin/bash
PS3='myprompt> '
select cmd in command1 command2
do
    case $REPLY in
        command1)
            echo response1
            ;;
        command2)
            echo response2
            ;;
        exit)
            break
            ;;
    esac
done
Or with the echo and read builtins:
prompt='myprompt> '
while [[ $cmd != exit ]]; do
    echo -n "$prompt"
    read cmd
    echo ${cmd/#command/response}
done
I have a script where I start a packet capture with tshark and then check whether the user has submitted an input text file.
If there is a file present, I need to run a command for every item in the file through a loop (while tshark is running); else continue running tshark.
I would also like some way to stop tshark with user input such as a letter.
Code snippet:
echo "Starting tshark..."
sleep 2
tshark -i ${iface} &>/dev/null
tshark_pid=$!
# if devices aren't provided (such as in the case of new devices), start capturing directly
if [ -z "$targets" ]; then
echo "No target list provided."
else
for i in $targets; do
echo "Attempting to deauthenticate $i..."
sudo aireplay-ng -0 $number -a $ap -c $i $iface$mon
done
fi
What happens here is that tshark starts, and only when I quit it using Ctrl+c does it move on to the if statement and subsequent loop.
Adding a & at the end of a command executes it in a new subprocess. Mind that you won't be able to kill it with Ctrl+C.
For example:
firefox
will block the shell, while
firefox &
will not block the shell.
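Applied to the tshark snippet above, a rough sketch (reusing the variable names from the question) might look like this; the "press q to stop" part is just one way to stop the capture with a keystroke:
# Run tshark in the background so the script can continue to the if/for part.
tshark -i "${iface}" &>/dev/null &
tshark_pid=$!          # $! now holds tshark's PID

# ... deauthentication loop goes here ...

# Stop the capture when the user presses q.
read -r -n 1 -p "Press q to stop the capture: " key
echo
[ "$key" = "q" ] && kill "$tshark_pid"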
I'm trying to come up with a way to pass a silent flag to a bash script so that all output will be directed to /dev/null if it is present and to the screen if it is not.
An MWE of my script would be:
#!/bin/bash
# Check if silent flag is on.
if [ $2 = "-s" ]; then
echo "Silent mode."
# Non-working line.
out_var = "to screen"
else
echo $1
# Non-working line.
out_var = "/dev/null"
fi
command1 > out_var
command2 > out_var
echo "End."
I call the script with two variables, the first one is irrelevant and the second one ($2) is the actual silent flag (-s):
./myscript.sh first_variable -s
Obviously the out_var lines don't work, but they give an idea of what I want: a way to direct the output of command1 and command2 to either the screen or to /dev/null depending on -s being present or not.
How could I do this?
You can use the naked exec command to redirect the current program without starting a new one.
Hence, a -s flag could be processed with something like:
if [[ "$1" == "-s" ]] ; then
exec >/dev/null 2>&1
fi
The following complete script shows how to do it:
#!/bin/bash
echo XYZZY
if [[ "$1" == "-s" ]] ; then
exec >/dev/null 2>&1
fi
echo PLUGH
If you run it with -s, you get XYZZY but no PLUGH output (well, technically, you do get PLUGH output but it's sent to the /dev/null bit bucket).
If you run it without -s, you get both lines.
The before and after echo statements show that exec is acting as described, simply changing redirection for the current program rather than attempting to re-execute it.
As an aside, I've assumed you meant "to screen" to be "to the current standard output", which may or may not be the actual terminal device (for example if it's already been redirected to somewhere else). If you do want the actual terminal device, it can still be done (using /dev/tty for example) but that would be an unusual requirement.
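For completeness, a minimal sketch of that unusual case, writing to the terminal device directly so the message shows up even if standard output has been redirected:
# This line reaches the controlling terminal regardless of redirections.
echo "always visible on the terminal" > /dev/tty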
There are lots of things that could be wrong with your script; I won't attempt to guess since you didn't post any actual output or errors.
However, there are a couple of things that can help:
You need to figure out where your output is really going. Standard output and standard error are two different things, and redirecting one doesn't necessarily redirect the other.
In Bash, you can send output to /dev/stdout or /dev/stderr, so you might want to try something like:
# Send standard output to the tty/pty, or wherever stdout is currently going.
cmd > /dev/stdout
# Do the same thing, but with standard error instead.
cmd > /dev/stderr
Redirect standard error to where standard output currently points, and then send standard output to /dev/null. Order matters here: stderr keeps going to the original destination (e.g. the terminal) while stdout is discarded.
cmd 2>&1 > /dev/null
There may be other problems with your script, too, but for issues with Bash shell redirections the GNU Bash manual is the canonical source of information. Hope it helps!
If you don't want to redirect all output from your script, you can use eval. For example:
$ fd=1
$ eval "echo hi >$a" >/dev/null
$ fd=2
$ eval "echo hi >$a" >/dev/null
hi
Make sure you use double quotes so that the variable is replaced before eval evaluates it.
In your case, you just need to change out_var = "to screen" to out_var="/dev/tty" (note: no spaces around the =), and use it like this: command1 > $out_var (note the '$' you were lacking).
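Putting those fixes together (and arranging the branches so that -s actually discards output, which seems to be the intent), a corrected sketch of the MWE might be:
#!/bin/bash
# Corrected sketch of the MWE: -s sends output to /dev/null, otherwise to the terminal.
if [ "$2" = "-s" ]; then
    echo "Silent mode."
    out_var="/dev/null"
else
    echo "$1"
    out_var="/dev/tty"
fi

command1 > "$out_var"
command2 > "$out_var"
echo "End."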
I implemented it like this
# Set debug flag as desired
DEBUG=1
# DEBUG=0
if [ "$DEBUG" -eq "1" ]; then
OUT='/dev/tty'
else
OUT='/dev/null'
fi
# actual script use commands like this
command > $OUT 2>&1
# or like this if you need
command 2> $OUT
Of course you can also set the debug mode from a cli option, see How do I parse command line arguments in Bash?
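For instance, a minimal sketch that turns on debugging when the script is called with -d as its first argument (the flag name is just an example):
DEBUG=0
if [ "$1" = "-d" ]; then
    DEBUG=1
    shift    # drop the flag so the remaining arguments keep their positions
fi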
And you can have multiple debug or verbose levels like this
# Set VERBOSE level as desired
# VERBOSE=0
VERBOSE=1
# VERBOSE=2
VERBOSE1='/dev/null'
VERBOSE2='/dev/null'
if [ "$VERBOSE" -gte 1 ]; then
VERBOSE1='/dev/tty'
fi
if [ "$VERBOSE" -gte 2 ]; then
VERBOSE2='/dev/tty'
fi
# actual script use commands like this
command > $VERBOSE1 2>&1
# or like this if you need
command 2> $VERBOSE2
I'm trying to make a script to execute a set of commands from a file.
The file, for example, has a set of 3 commands (perl script-a, perl script-b, perl script-c), each command on a new line, and I made this script:
#!/bin/bash
for command in `cat file.txt`
do
echo $command
perl $command
done
The problem is that some scripts get stuck or take too long to finish, and I want to see their outputs. Is it possible to make the bash script jump to the next command in the txt file when I send CTRL+C to the currently executing command, instead of cancelling the whole bash script?
Thank you
You can use trap 'continue' SIGINT to ignore Ctrl+c:
#!/bin/bash
# ignore & continue on Ctrl+c (SIGINT)
trap 'continue' SIGINT
while read command
do
echo "$command"
perl "$command"
done < file.txt
# Enable Ctrl+c
trap SIGINT
Also you don't need to call cat to read a file's contents.
#!/bin/bash
for scr in $(cat file.txt)
do
echo "$scr"
# Only do this if you have a few lines in your file.txt.
# Execute each perl command in the background and save its output.
# From your question it seems each of these scripts is independent.
perl "$scr" &> "${scr}_perl_execution.out" &
done
You can check each of the outputs to see if the command is doing what you expect. If not, you can use kill to terminate each of the commands.
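As a rough sketch of that last step (the pids array is an addition for illustration, not part of the original answer), you could remember each background PID and kill it later:
pids=()
for scr in $(cat file.txt)
do
    perl "$scr" &> "${scr}_perl_execution.out" &
    pids+=("$!")          # remember the PID of the background job
done

# Terminate any script that is still running.
for pid in "${pids[@]}"; do
    kill "$pid" 2>/dev/null
done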
I'm writing a bash wrapper to learn some scripting concepts. The idea is to write a script in bash and set it as a user's shell at login.
I made a while loop that reads and evals the user's input, and then noticed that whenever the user typed CTRL + C, the script aborted and the user session ended.
To avoid this, I trapped SIGINT, doing nothing in the trap.
Now, the problem is that when you type CTRL + C halfway through a command, it doesn't get cancelled as it would be in bash - it just ignores CTRL + C.
So, if I type ping stockoverf^Cping stackoverflow.com, I get ping stockoverfping stackoverflow.com instead of the ping stackoverflow.com that I wanted.
Is there any way to do that?
#!/bin/bash
# let's trap SIGINT (CTRL + C)
trap "" SIGINT
while true
do
read -e -p "$USER - SHIELD: `pwd`> " command
history -s $command
eval $command
done
I know this is old as all heck, but I was struggling to do something like this and came up with this solution. Hopefully it helps someone else out!
#!/usr/bin/env bash
# Works ok when it is invoked as a bash script, but not when sourced!
function reset_cursor(){
echo
}
trap reset_cursor INT
while true; do
command=$( if read -e -p "> " line ; then echo "$line"; else echo "quit"; fi )
if [[ "$command" == "quit" ]] ; then
exit
else
history -s $command
eval "$command"
fi
done
trap SIGINT
By throwing the read into a subshell, you ensure that it will get killed by a SIGINT signal. If you trap that SIGINT as it percolates up to the parent, you can ignore it there and move on to the next iteration of the while loop. You don't have to have reset_cursor as its own function, but I find it nice in case you want to do more complicated stuff.
I had to add the if statement in the subshell because otherwise it would ignore Ctrl+D - but we want it to be able to 'log us out' without forcing the user to type exit or quit manually.
You could use a tool like xdotool to send Ctrl-A (beginning-of-line), Ctrl-K (delete-to-end-of-line) and Return (to clean up the line):
#!/bin/bash
trap "xdotool key Ctrl+A Ctrl+k Return" SIGINT;
unset command
while [ "$command" != "quit" ] ;do
eval $command
read -e -p "$USER - SHIELD: `pwd`> " command
done
trap SIGINT
Please have a look at bash's manual page, searching for the "debug" keyword:
man -Pless\ +/debug bash
I basically have a bash script which executes 5 commands in a row. I want to add logic that asks me "Do you want to execute command A?" and, if I say YES, the command is executed; otherwise the script jumps to the next command and I see the prompt "Do you want to execute command B?".
The script is very simple and looks like this:
echo "Running A"
commandA &
sleep 2s;
echo "done!"
echo "Running B"
commandB &
sleep 2s;
echo "done!"
...
Use the read builtin to get input from the user.
read -p "Run command $foo? [yn]" answer
if [[ $answer = y ]] ; then
# run the command
fi
Put the above into a function that takes the command (and possibly the prompt) as an argument if you're going to do that multiple times.
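For example, here is a small sketch of such a function (the name confirm_and_run and the prompt text are just illustrations):
# Ask before running a command passed as the first argument.
confirm_and_run() {
    local cmd="$1"
    read -p "Run command $cmd? [yn] " answer
    if [[ $answer = y ]]; then
        eval "$cmd"
    fi
}

confirm_and_run "commandA"
confirm_and_run "commandB"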
You want the Bash read builtin. You can perform this in a loop using the implicit REPLY variable like so:
for cmd in "echo A" "echo B"; do
read -p "Run command $cmd? "
if [[ ${REPLY,,} =~ ^y ]]; then
eval "$cmd"
echo "Done!"
fi
done
This will loop through all your commands, prompt the user for each one, and then execute the command only if the first letter of the user's response is a Y or y character. Hope that helps!