I'm trying to use telnet from a script (to be used in a Python program). I want to open the connection, send a command, and close the connection in just one line. The command I want to send starts a program on a remote machine, but I don't want to wait for that program to finish before the telnet connection exits.
I tried "echo myCommand | netcat 192.168.1.50 23", but it waits for the program to finish.
Thanks for your help.
Use bash's built-in TCP socket feature:
echo yourCommand >/dev/tcp/192.168.1.50/23
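Note that /dev/tcp/... is a redirection feature of bash itself, not a real device, so this only works when the script runs under bash (not sh or dash). If you also need to read a reply before disconnecting, you can open the socket read/write on a file descriptor. A minimal sketch, using the host and port from the question:
# open fd 3 read/write on the TCP socket (bash-only feature)
exec 3<>/dev/tcp/192.168.1.50/23
# send the command
echo "myCommand" >&3
# optionally read one reply line, giving up after 2 seconds
read -t 2 -r reply <&3 && echo "server said: $reply"
# close the socket
exec 3>&-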
There is this program called sshuttle that can connect to a server and create a tunnel.
I wish to create a bash function that sequentially:
opens a tunnel to a remote server (sshuttle -r myhost 0/0),
performs one arbitrary command line,
kill -s TERM <pidOfTheAboveTunnel>.
A basic idea (that works, but the 5-second delay is a problem) is: sshuttle -r myhost 0/0 & sleep 5; mycommand; kill -s TERM $(pgrep sshuttle)
Could expect be used to expect the string "c : Connected to server." that is received from stderr here? My attempts as a newbie were met with nothing but failure, and the man page is quite impressive.
When you use expect to control another program, it connects to that program through a pseudo-terminal (pty), so expect sees the same output from the program as you would on a terminal; in particular, there is no distinction between stdout and stderr. Assuming that your mycommand is to be executed on the local machine, you could use something like this as an expect (not bash) script:
#!/usr/bin/expect
# start the tunnel under expect's control
spawn sshuttle -r myhost 0/0
# block until sshuttle reports that the tunnel is up
expect "Connected to server."
# run the local command
exec mycommand
# terminate the tunnel
exec kill [exp_pid]
close
The exec kill may not be needed if sshuttle exits when its stdin is closed, which will happen on the next line.
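If you would rather stay in bash, a rough equivalent is to log sshuttle's output and poll for the marker line instead of sleeping a fixed 5 seconds. A sketch, assuming sshuttle prints "Connected to server." as in the question (the fractional sleep requires GNU coreutils):
#!/bin/bash
log=$(mktemp)
# start the tunnel, capturing stdout and stderr in a log file
sshuttle -r myhost 0/0 >"$log" 2>&1 &
tunnel_pid=$!
# wait for the connection message, bailing out if the tunnel dies first
until grep -q "Connected to server." "$log"; do
    kill -0 "$tunnel_pid" 2>/dev/null || { echo "tunnel died" >&2; exit 1; }
    sleep 0.2
done
mycommand
kill -s TERM "$tunnel_pid"
rm -f "$log"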
From myhost.mydomain.com, I start an nc listener, then log in to another host to start a netcat push back to my host:
nc -l 9999 > data.gz &
ssh repo.mydomain.com "cat /path/to/file.gz | nc myhost.mydomain.com 9999"
These two commands are part of a script. Only 32K bytes are sent to the host before the ssh command terminates; the nc listener then gets an EOF and terminates as well.
When I run the ssh command on the command line (i.e. not as part of the script) on myhost.mydomain.com the complete file is downloaded. What's going on?
I think there is something else happening in your script that causes this effect. For example, if you run the second command in the background as well and then terminate the script, your OS might kill the background commands during script cleanup.
Also look for set -o pipefail, which makes a pipeline return a non-zero exit status when any of its commands fails; combined with set -e, that can abort the script early.
On a second note, the approach looks overly complex to me. Try to reduce it to
ssh repo.mydomain.com "cat /path/to/file.gz" > data.gz
(ssh connects the remote command's stdout to the local one). It's clearer when you write it like this:
ssh > data.gz repo.mydomain.com "cat /path/to/file.gz"
That way, you can get rid of nc. As far as I know, nc is synchronous, so the second invocation (which sends the data) should only return after all the data has been sent and flushed.
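To check whether the transfer really is truncated, comparing checksums on both ends is a quick test (paths as in the question):
ssh repo.mydomain.com "md5sum /path/to/file.gz"
md5sum data.gz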
I need to give the user the ability to send/receive messages over the network (using netcat) while the connection is established (the user, in this case, is using nc as a client). The problem is that I need to send a line before the user starts interacting. My first attempt was:
echo 'my first line' | nc server port
The problem with this approach is that nc closes the connection when echo finishes, so the user can't send commands via stdin because the shell is given back to them (and the answer from the server is never received either, because the server takes a few seconds to start answering and by then nc has closed the connection).
I also tried grouping commands:
{ echo 'my first line'; cat -; } | nc server port
It works almost the way I need, but if the server closes the connection, it waits until I press <ENTER> to give me the shell back. I need to get the shell back as soon as the server closes the connection (in this case the client, my nc command, will never close the connection unless I press Ctrl+C).
I also tried named pipes, without success.
Do you have any tip on how to do it?
Note: I'm using openbsd-netcat.
You probably want to look into expect(1).
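For instance, a minimal expect sketch (server and port are placeholders as in the question): it sends the first line itself, then hands the session over to the user with interact, which returns as soon as nc exits, i.e. when the server closes the connection.
#!/usr/bin/expect
# connect with nc under expect's control
spawn nc server port
# send the initial line before the user takes over
send "my first line\r"
# give the user the interactive session; returns when nc exits
interact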
It is cat that waits for the <ENTER>.
You could run a command after nc that kills the cat, and the shell will return automatically.
You can try this to see if it works for you:
perl -e '$| = 1; print "my first line\n"; while (<STDIN>) { print }' | nc server port
This one should produce the behaviour you want:
echo "Here is your MOTD." | nc server port ; nc server port
I would suggest you use cat << EOF, but I think it will not work as you expect.
I don't know how you can send EOF when the connection is closed.
I have a previously started process (process1.sh) running in the background with a PID of 1111 (or some other arbitrary number). How could I send something like command option1 option2 to that process with PID 1111?
I don't want to start a new process1.sh!
Named Pipes are your friend. See the article Linux Journal: Using Named Pipes (FIFOs) with Bash.
Based on the answers:
Writing to stdin of background process
Accessing bash command line args $@ vs $*
Why my named pipe input command line just hangs when it is called?
Can I redirect output to a log file and background a process at the same time?
I wrote two shell scripts to communicate with my game server.
This first script is run when the computer starts up. It starts the server and configures it to read/receive my commands while it runs in the background:
start_czero_server.sh
#!/bin/sh
# Go to the game server application folder where the game application `hlds_run` is
cd /home/user/Half-Life
# Set up a pipe named `/tmp/srv-input`
rm -f /tmp/srv-input
mkfifo /tmp/srv-input
# At least one process must have the fifo open for writing,
# or your server will receive an EOF on its input.
cat > /tmp/srv-input &
# The PID of this cat is saved in the /tmp/srv-input-cat-pid file
# for a later kill.
#
# To send an EOF to your server, kill the `cat > /tmp/srv-input` process
# whose PID has been saved in the `/tmp/srv-input-cat-pid` file.
echo $! > /tmp/srv-input-cat-pid
# Start the server reading from the pipe named `/tmp/srv-input`
# And also output all its console to the file `/home/user/Half-Life/my_logs.txt`
#
# Replace the `./hlds_run -console -game czero +port 27015` by your application command
./hlds_run -console -game czero +port 27015 > my_logs.txt 2>&1 < /tmp/srv-input &
# Successful execution
exit 0
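Later, to stop feeding the server (and let it finally see an EOF on its input), kill the cat process whose PID was saved above:
kill "$(cat /tmp/srv-input-cat-pid)"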
This second script is just a wrapper which allows me to easily send commands to my server:
send.sh
#!/bin/sh
half_life_folder="/home/jack/Steam/steamapps/common/Half-Life"
half_life_pid_tail_file_name=hlds_logs_tail_pid.txt
half_life_pid_tail="$(cat "$half_life_folder/$half_life_pid_tail_file_name")"
if ps -p "$half_life_pid_tail" > /dev/null
then
    echo "$half_life_pid_tail is running"
else
    echo "Starting the tailing..."
    tail -n 2 -f "$half_life_folder/my_logs.txt" &
    echo $! > "$half_life_folder/$half_life_pid_tail_file_name"
fi
# Forward all the script arguments (not "$#", which is only their count)
# to the server through the fifo
echo "$@" > /tmp/srv-input
sleep 1
exit 0
Now every time I want to send a command to my server, I just type on the terminal:
./send.sh mp_timelimit 30
This script also lets me keep tailing the server output on my current terminal: every time I send a command, it checks whether a tail process is already running in the background. If not, it starts one, and every time the server writes output I can see it on the terminal I used to send the command, just like applications run with the & operator appended.
You can also keep another terminal open just to watch the server console. To do that, use the tail command with the -f flag to follow the server's output:
tail -f /home/user/Half-Life/my_logs.txt
If you don't want to be limited to signals, your program must support one of the Inter Process Communication methods. See the corresponding Wikipedia article.
A simple method is to make it listen for commands on a Unix domain socket.
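Your program has to create and listen on the socket itself, but the shell side is then a one-liner. For example, with OpenBSD netcat (the socket path is a placeholder):
echo "command option1 option2" | nc -U /tmp/process1.sock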
For how to send commands to a server via a named pipe (fifo) from the shell see here:
Redirecting input of application (java) but still allowing stdin in BASH
How do I use exec 3>myfifo in a script, and not have echo foo>&3 close the pipe?
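The trick those links describe is to hold the fifo open on a dedicated file descriptor, so that each echo does not close the pipe. A sketch using the /tmp/srv-input fifo from the scripts above (opening a fifo for writing blocks until a reader, here the server, has it open):
exec 3> /tmp/srv-input      # keep the fifo open for writing
echo "mp_timelimit 30" >&3  # each write goes through without closing the pipe
echo "status" >&3
exec 3>&-                   # close it; the server sees EOF only when no writer remains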
You can use bash's coproc command (available only in bash 4.0+); it's like ksh's |&.
See this page for examples: http://wiki.bash-hackers.org/syntax/keywords/coproc
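A minimal sketch (bash 4.0+); the server command is the one from the scripts above, and whether a given program behaves well over plain pipes instead of a terminal depends on the program:
#!/bin/bash
# start the server as a coprocess; SRV[0] is its stdout, SRV[1] its stdin
coproc SRV { ./hlds_run -console -game czero +port 27015; }
# send a command to the coprocess
echo "mp_timelimit 30" >&"${SRV[1]}"
# read one line of its output
read -r line <&"${SRV[0]}" && echo "server: $line"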
You can't send new arguments to a running process. But if you are implementing this process yourself, or it's a process that can take commands from a pipe, then the other answers will help.
I would like to execute an ssh command and pipe the output to a file.
In general I would do:
ssh user@ip "command" >> /myfile
The problem is that ssh closes the connection once the command is executed. However, my command sends its output to the ssh channel via another program running in the background, so I am not receiving that output.
How can I get ssh to leave my shell open?
cheers
sven
My understanding is that command starts some background process that will perhaps write some output to the terminal later. If command terminates before that happens, the ssh session is terminated and there is no terminal left for the background program to write to.
One simple and naive solution is to just sleep long enough:
ssh user@ip "command; sleep 30m" >> /myfile
A better solution than sleep would be to wait for the background process(es) to finish in some more intelligent way, but that is impossible to say without further details.
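For example, if the shell you start over ssh is itself what puts the process in the background, an explicit wait keeps the session (and the output channel) open until that job exits. This is only a sketch: wait cannot see grandchildren, so it does not help if command forks internally.
ssh user@ip "command & wait" >> /myfile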
Something more powerful than bash would be Python with Paramiko or pexpect.