Why does ack over ssh not work? - bash

I've got a simple bash script to remove some folders on a remote server over ssh. It basically does this:
THE_HOST=12.34.56.78
ssh me@$THE_HOST "rm /the/file/path/thefile.zip"
This works perfectly well. Before I do this I often search the contents of the files in a folder for a string using ack:
ack thestring /the/folder/path/
This works perfectly when I ssh into the server and run it, but when I do it in one command it doesn't work:
ssh me@$THE_HOST "ack thestring /the/folder/path/"
This seems to freeze or run forever: I get no output and the command never ends. Does anybody know why this doesn't work for ack?

It could be that ack behaves differently when it is not run in a terminal. Try ssh's -t option:
ssh -t me@$THE_HOST "ack thestring /the/folder/path/"
When ack detects that stdin is not a terminal (a tty device), it attempts to read the text to search from stdin instead of from the given file/folder. That is what happens when you run it through ssh: stdin is connected to the ssh connection, which does not look like a terminal (tty) to ack.
The -t option tells ssh to allocate a tty and connect it to the stdin/stdout of the remote command; ack then believes it is running in a terminal and uses the file/folder argument for searching.
See http://github.com/beyondgrep/ack2/issues/659
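You can reproduce the check ack performs with a plain shell test; [ -t 0 ] asks whether stdin is a terminal. This is just an illustration of the tty difference, not part of ack itself:
ssh me@$THE_HOST '[ -t 0 ] && echo "stdin is a tty" || echo "stdin is not a tty"'
# prints: stdin is not a tty
ssh -t me@$THE_HOST '[ -t 0 ] && echo "stdin is a tty" || echo "stdin is not a tty"'
# prints: stdin is a tty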

Related

Redirect ssh output to file while performing other commands

I'm looking for some help with a script of mine. I'm new to bash scripting and I'm trying to start a service on a remote host over ssh and capture all the output of that service to a file on my local host. The problem is that I also want to execute other commands after this one:
ssh $remotehost "./server $port" > logFile &
ssh $remotehost "nc -q 2 localhost $port < $payload"
Now, the first command starts an HTTP server that simply prints out any request it receives, while the second command sends a request to that server.
Normally, if I were to execute the two commands on two separate shells I would get the first response on the terminal, but now I need it on the file.
I would like to have the server output all the requests on the log file, keeping a sort of open ssh connection to receive any new output of the server process.
I hope I made myself clear.
thank you for your help!
EDIT: Here's the output of the first command:
(Output is empty in the terminal... it waits for requests).
As you can see, the command doesn't return anything yet; it just waits.
When I execute the second command on a new terminal (the request), the output of the first terminal is the following:
The request is displayed.
Now I would like to execute both commands in sequence in a bash script, redirecting the output of the first command (which is empty until the second command runs) to a file, so that any output triggered by later requests ends up in that file.
EDIT2: As of now, with the commands above, the server answers any requests but the output is not registered in the log file.
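If the requests show up interactively but never reach the log file, the usual suspect is stdio buffering: many programs switch from line buffering to block buffering when stdout is not a terminal, so nothing is written until the buffer fills. A hedged sketch of one workaround, assuming GNU coreutils' stdbuf is available on the remote host:
# Force the server to line-buffer its stdout so each request is
# flushed to logFile immediately (stdbuf is assumed to exist remotely).
ssh -n "$remotehost" "stdbuf -oL ./server $port" > logFile 2>&1 &
sleep 1   # give the server a moment to start listening
ssh "$remotehost" "nc -q 2 localhost $port < $payload"
The -n option stops the backgrounded ssh from reading the script's stdin.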

SSH command within a script terminates prematurely

From myhost.mydomain.com, I start a nc listener. Then login to another host to start a netcat push to my host:
nc -l 9999 > data.gz &
ssh repo.mydomain.com "cat /path/to/file.gz | nc myhost.mydomain.com 9999"
These two commands are part of a script. Only 32 KB are sent to the host, then the ssh command terminates; the nc listener gets an EOF and terminates as well.
When I run the ssh command on the command line (i.e. not as part of the script) on myhost.mydomain.com the complete file is downloaded. What's going on?
I think there is something else that happens in your script which causes this effect. For example, if you run the second command in the background as well and terminate the script, your OS might kill the background commands during script cleanup.
Also look for set -o pipefail, which makes a pipeline return the exit status of the last failing command; combined with set -e, this aborts the script as soon as any command in a pipeline fails.
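As a sketch of the first point, assuming the nc listener is the backgrounded command, an explicit wait keeps the script alive until all the data has arrived:
nc -l 9999 > data.gz &
listener_pid=$!
ssh repo.mydomain.com "cat /path/to/file.gz | nc myhost.mydomain.com 9999"
wait "$listener_pid"   # don't let the script exit and kill the listener early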
On a second note, the approach looks overly complex to me. Try to reduce it to
ssh repo.mydomain.com "cat /path/to/file.gz" > data.gz
(ssh connects the remote command's stdout to its own local stdout). It's clearer when you write it like this:
ssh > data.gz repo.mydomain.com "cat /path/to/file.gz"
That way, you can get rid of nc. As far as I know, nc is synchronous, so the second invocation (which sends the data) should only return after all the data has been sent and flushed.

Redirect stdout to a running process

I know how to redirect stdout to a file, but is it possible to redirect stdout to a process (linux environment)?
For example, if I move an active SSH session to the background via "~^Z", is there a way to then start another program on my local host and redirect its stdout to the SSH session?
Thanks!
Sometimes a trick like
echo "something" > /proc/pid-of-process/fd/0
works, but if the process's stdin is linked to a pseudoterminal, it won't.
So, here's one way to do what you want.
1. (Optional) Configure your SSH connection to use certificates / passwordless login.
2. Create a named pipe (e.g. mkfifo mypipe).
3. Use tail to read from the pipe and pass that to the SSH process, e.g.:
tail -f mypipe | ssh -t -t user@somehost.com
4. Send whatever you want to go into the ssh session into the named pipe, e.g.:
echo "ls -l" > mypipe
If you need to pipe in the output of another program, just do:
./my-program > /path/to/mypipe
You're done.
Some notes:
Step 1 is optional, but if you skip it you will have to type your password on the terminal where you start the SSH session; you can't pass it through the pipe. In fact, if you try, it will just appear as plaintext sent through the pipe once the SSH connection completes.
Your SSH session is now only as secure as your named pipe. Just a heads up.
You won't be able to use the SSH session, once you connect, from the originating terminal. You'll have to use the named pipe.
You can always ctrl+c the SSH process in the original terminal to kill it and restore functionality of the terminal.
Any output will appear on the original terminal -- probably obvious but I just wanted to point it out.
The -f option to tail is necessary to prevent the SSH process from receiving an EOF when the pipe is empty. There are other ways to prevent the pipe from closing, but this is the easiest in my opinion.
The -t -t option to ssh forces tty allocation, otherwise it would complain since stdin is being piped in your case.
Lastly, there is almost definitely a better way to do what you want -- you should consider finding it if this is anything serious / long term.
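For reference, here is the whole recipe condensed into one sketch (user@somehost.com and my-program are the placeholders from the steps above; key-based login is assumed):
#!/usr/bin/env bash
pipe=/tmp/mypipe
mkfifo "$pipe"
# tail -f keeps the pipe open so ssh never sees EOF;
# -t -t forces tty allocation even though stdin is a pipe.
tail -f "$pipe" | ssh -t -t user@somehost.com &
echo "ls -l" > "$pipe"     # run a command in the remote session
./my-program > "$pipe"     # or feed it another program's stdout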

Buffered pipe in bash

I'm running a Bukkit (Minecraft) server on a Linux machine and I want to have the server gracefully shut down using the server's stop command and the computer suspend at a certain time using pm-suspend from the command line. Here's what I've got:
me@comp:~/dir$ perl -e 'sleep [time]; print "stop\n";' | ./server && sudo pm-suspend
(I've edited my /etc/sudoers so I don't have to enter my password when I suspend.)
The thing is, while perl -e is sleeping, the server seems to expect a constant stream of bytes (that's my guess; I could be misunderstanding something), so it prints out all of the nothing it receives, taking up precious resources:
me@comp:~/dir$ ...
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> (an endless stream of > prompts)
Is there any such thing as a buffered pipe? If not, are there any ways to send delayed input to a script?
You may want to have a look at Bukkit's wiki, which recommends an init script for permanently running servers.
This init script uses a rather unconventional approach to communicate with the running server. The server is started in a screen session, then all commands are sent to the server console via screen, e.g.
screen -p 0 -S $SCREEN -X eval 'stuff \"stop\"\015'
See https://github.com/Ahtenus/minecraft-init/blob/master/minecraft
This approach suggests that Bukkit may expect standard input to be attached to a terminal, thus requiring the screen wrapper (which is itself a terminal emulator) for unattended runs.
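A minimal sketch of how the question's timed shutdown could sit on top of screen (the session name and $DELAY are made up for illustration):
SCREEN=bukkit
screen -dmS "$SCREEN" ./server   # start the server inside a detached screen session
sleep "$DELAY"                   # wait until shutdown time
screen -p 0 -S "$SCREEN" -X stuff "stop$(printf '\r')"   # type "stop" into the server console
sudo pm-suspend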

Script: SSH command execute and leave shell open, pipe output to file

I would like to execute an ssh command and pipe the output to a file.
In general I would do:
ssh user@ip "command" >> /myfile
The problem is that ssh closes the connection once the command is executed; however, my command sends its output to the ssh channel via another program in the background, so I am not receiving that output.
How can I tell ssh to leave the shell open?
cheers
sven
My understanding is that command starts some background process that will perhaps write some output to the terminal later. If command terminates before that, the ssh session is closed and there is no terminal left for the background program to write to.
One simple and naive solution is to just sleep long enough:
ssh user@ip "command; sleep 30m" >> /myfile
A better solution than sleep would be to wait for the background process(es) to finish in some more intelligent way, but that is impossible to say without further details.
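For example, if the background process has a known name, the remote shell could poll for it instead of sleeping blindly (myprog is a hypothetical name for that process):
# Keep the ssh session alive until the background program has exited;
# only then does the redirection to /myfile end.
ssh user@ip 'command; while pgrep -x myprog > /dev/null; do sleep 5; done' >> /myfile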
Something more powerful than bash would be Python with Paramiko and Pexpect.
