pager (less) -- get current scroll position? - shell

I am scripting the display of the output of a program (really just git diff) with tmux: once a filesystem change is detected, the shell script executes tmux send-keys q enter C-l "git diff" enter, which effectively refreshes the git diff view.
You might consider this similar to functionality provided by iTerm's coprocesses.
The problem is that, on refresh, I want it to scroll back to the same position it was in.
One of the reasons for using tmux is that the window is actually a totally normal and interactive terminal session that can be interacted with as normal to scroll around to look at the full output.
But I want to obtain the scroll position somehow.
Suppose I want to actually do computation on the text content of the terminal window itself, exactly like iTerm2's coprocess does, but so that I can use it on Linux (over ssh). Does tmux provide this ability?

I'm unsure about capturing this with a script, but less -N will show line numbers.
And -jn or --jump-target=n can jump to a location.
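For instance, combined with git diff (a rough sketch; line 120 and the jump target are arbitrary placeholders):
git diff | less -N +120g
git diff | less -N --jump-target=10
The first starts the pager at line 120 with line numbers shown; the second makes jumps and search hits land 10 lines below the top of the screen.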

About iTerm's coprocesses,
tmux has a command pipe-pane that can be used to pipe the input and output of a shell command to the output and input of a target pane specified by -t.
So if I have a shell program, ~/script.sh for example:
#!/usr/bin/env bash
# Read every line of pane output; whenever a line starts with
# "are you there?", type "echo yes" back into the pane.
while read line; do
    if [[ "$line" = "are you there?"* ]]; then
        echo "echo yes"
    fi
done
Note that read line will read every line printed to the pane, i.e. the prompt as well.
I can connect its stdin and stdout to pane 0 in my-session:my-window like so:
tmux pipe-pane -IO -t my-session:my-window.0 "~/script.sh"
and then at the prompt, type echo are you there?, and it responds:
$ echo are you there?
are you there?
$ echo yes
yes
Be careful using -I and -O together, as it can easily cause a feedback loop (I had to kill the tmux server a few times while experimenting with this feature).
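If you only need one direction, dropping one of the flags avoids the loop entirely. For example, to only watch a pane's output (a sketch; the log path is a hypothetical choice):
tmux pipe-pane -O -t my-session:my-window.0 "cat >> ~/pane-output.log"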

Related

tmux: run command in pane and capture result, in bash script

Using tmux, I'd like to run a command from one pane against another pane, and capture the output of the command.
For example, say in pane 7 I have an SSH session running, and I'd like to run a bash script in pane 2 to capture the host name from pane 7.
Is this possible?
I know I can use send-keys like so
$ tmux send-keys -t 7 "hostname" Enter
but I'm not sure how to capture the output from pane 7 into a bash variable.
I don't mind if it displays on the screen either (it doesn't have to happen in the background).
EDIT: Note that hostname is just an example; I would like to run other scripts against each pane as well.
As an alternative to capture-pane, you can use pipe-pane in a similar way. It is often used for logging: you give it a command to pipe all pane output into, or no command at all to stop piping. You end up with something like
tmux pipe-pane -t 7 'cat >/tmp/capture'
tmux send-keys -t 7 'hostname' Enter
sleep 1
tmux pipe-pane -t 7 # stops piping
IT=$(</tmp/capture)
Beware: this capture includes carriage-return characters, and you will need to remove the first and last lines (the echoed command and the new prompt) to get just the wanted output, e.g.:
IT=$(sed '1d;$d;s/\r//' </tmp/capture)
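Wrapped up as a small helper function, the same idea looks like this (a sketch; the name run_in_pane, the temporary file, and the fixed one-second wait are all assumptions rather than anything tmux provides):
run_in_pane() {
    local pane="$1" cmd="$2" file
    file=$(mktemp)
    tmux pipe-pane -t "$pane" "cat > $file"   # start capturing pane output
    tmux send-keys -t "$pane" "$cmd" Enter
    sleep 1                                   # crude: assume the command finishes quickly
    tmux pipe-pane -t "$pane"                 # stop piping
    sed '1d;$d;s/\r//' "$file"                # drop the echoed command, the new prompt and CRs
    rm -f "$file"
}
Then host=$(run_in_pane 7 hostname) gives roughly the same result as the manual steps above.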
While not exactly what I was looking for, the tmux capture-pane command seems to capture the displayed output of a pane, which I can of course run after running my command.
example.sh
#!/bin/bash
# run the command in another pane
tmux send-keys -t 7 "hostname" Enter
sleep 1   # give the command a moment to finish
# capture its output
IT=$(tmux capture-pane -p -t 7)
echo "$IT"

SSH - terminal misbehaving when SSH invoked from a `while read` loop

I'm coding a Bash script to automate tasks across multiple servers.
I am logging in to a CentOS 7 machine over SSH to run an editor (nano, vi, ...):
ssh -tt centos@... '/bb/Conf edit'
The /bb/Conf edit is basically just vi /bb/conf.yaml.
When I run the SSH command from my shell, it works fine. However, when the same SSH command is run from a Bash script inside a while read ...; do loop, the editor has the wrong size (80x40, I guess) and seems to ignore the keys I press - i.e. in nano, Ctrl+x doesn't do anything. The only key that works is Ctrl+c, which closes the connection.
I thought this was something related to the TERM variable, as per this, so I tried adding export TERM=xterm or TERM=rxvt to /bb/Conf or to the place calling the SSH command. The variable is in fact set in the target environment (I tried echo $TERM right before vi), but the terminal still misbehaved.
Then I have tried to put just that single command ssh ... to a new script. When running that, the editor worked fine.
After a while I found out that it works outside a while read loop, but not inside. I assume that the editors do some stdin/stdout magic and then read somehow breaks that.
Is there a way to run an editor like vi or nano from within a loop?
(The purpose in my case is to allow the users to edit files on multiple servers.)
That's because both read and ssh are reading from the same input stream. The solution is to use a different file descriptor for the while read loop:
while IFS= read -r -u3 line; do
    ssh ...
done 3< file
Here, we're using file descriptor 3 instead of stdin.
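A concrete sketch of the same pattern for the case above, assuming a hypothetical hosts.txt with one server per line:
while IFS= read -r -u3 host; do
    ssh -tt "$host" '/bb/Conf edit'   # ssh's stdin/stdout stay attached to the terminal
done 3< hosts.txt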
Lengthy pipelines can be hard to read and maintain, but you can use whitespace constructively: newlines are allowed following | and && and ||. Also, parentheses introduce a subshell which contains an arbitrary script, so indentation helps.
while read -u3 line; do
    : do stuff here that needs to read from stdin
done 3< <(
    command 1 of the pipeline |
    command 2 |
    command 3
)
That's clean and readable. The downside is that it puts the last part of the pipeline (the while loop) first, so the code kind of flows backwards.

How can I start an ssh session with a script without redirecting stdin?

I have a series of bash commands, some with interactive prompts, that I need run on a remote machine. I have to have them called in a certain order for different scenarios, so I've been trying to make a bash script to automate the process for me. However, it seems like every way to start an ssh session from a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way I can specify that a certain script be executed on a remote machine, while keeping stdin connected through ssh to the local terminal so that the user can interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script there will be a command that prompts for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information—by quitting vim or pressing return on a prompt—the script should continue to run like normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that seems to also take over stdin completely, rendering it impossible to let the user respond to prompts for input. I've tried adding the -t and -T options (I'm not sure if they're different), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
    rm $tempScript
fi
RM
exit $errorCode
It was there that I was redirecting stdin, not ssh. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Removing that line totally fixed my problem.
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$@"
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
    rm $tempScript
fi
RM
exit $errorCode
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters to the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
I've searched for solutions to this problem several times in the past, but never found a fully satisfactory one. Piping into ssh loses your interactivity. Two connections (scp, then ssh) are slower, and your temporary file might be left lying around. And putting the whole script on the command line often ends in escaping hell.
Recently I noticed that the command-line buffer size is usually quite large (getconf ARG_MAX was over 2 MB everywhere I looked), and this got me thinking about how I could use that to mitigate the escaping issue.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
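The same idea broken into steps, as a sketch (host.example.com and arg1 are placeholders, and the remote login shell is assumed to be bash so that process substitution works):
encoded=$(base64 < my_script | tr -d '\n')   # one long, shell-safe token
ssh -t host.example.com /bin/bash "<(echo $encoded | base64 --decode)" arg1
Since base64 output contains no shell metacharacters, $encoded needs no extra quoting on the remote side.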
I've expanded on this idea to produce a fully working BASH example script sshx that can run arbitrary scripts (not just BASH), where arguments can be local input files too, over ssh. See here.

bash show output only during runtime

I am trying to write a script that displays its output to the terminal only while it's running, much like the 'less' or 'ssh' commands.
When I launch said script, it would take over the whole terminal, print what it needs to print, and then, when I exit the script, I would return to my terminal where the only record that my script ran is the line showing the command itself. When I scroll up, I don't want to see my script's output.
[snoopdougg@machine /home/snoopdougg/logs]$ ls
alog.log blog.log clog.log myScript.sh
[snoopdougg@machine /home/snoopdougg/logs]$ whoami
snoopdougg
[snoopdougg@machine /home/snoopdougg/logs]$ ./myScript.sh
[snoopdougg@machine /home/snoopdougg/logs]$
(Like nothing ever happened... but myScript.sh would have printed things to the terminal while it was running).
How can I do this?
You're talking about the alternate screen, which you can access with a pair of terminfo capabilities, smcup and rmcup. Put this in a script and run it for a small demo:
tput smcup   # switch to the alternate screen
echo hello
sleep 5
tput rmcup   # switch back; the normal screen is restored untouched
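To do the same from inside myScript.sh itself, a sketch (assuming the terminal's terminfo entry defines smcup/rmcup):
#!/bin/bash
tput smcup                 # switch to the alternate screen
trap 'tput rmcup' EXIT     # switch back even if the script exits early or is interrupted
echo "doing work..."
sleep 5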
Use screen:
screen ./myScript.sh

Problems in sending input to detached 'screen' via readreg and paste

I am trying to send input to an interactive command running under screen. Here is my initial command:
screen -L -c ./customrc -S psql -d -m /opt/PostgreSQL/9.0/bin/psql
The above command will run psql interactively in a detached screen session. The customrc file is used to define a log file for the output (which I will read from another process by polling).
I am using the following two commands to send input to the psql running in screen:
screen -S psql -X readreg p psqlcommands.sql
screen -S psql -X paste p
The problem is that the above commands do not work unless I reattach the screen at least once; once I have attached and detached, they work as expected. I have to launch these commands via a background Java process, so an interactive shell (bash) is not available. My goal is to run psql in interactive mode, pass input to it, and capture its output via a log file.
So far I have tried running screen via xterm (or konsole or gnome-terminal) in attached mode, using readreg/paste and then detaching, but I realise that xterm will not be available in my production environment. I've also tried sending output to /proc/<pid>/fd/0, but I am unable to emulate pressing Enter (I have to attach and press it myself for the input to be accepted by psql). I think pipes and FIFOs may help, but I am unable to figure out how to proceed with them using screen and psql.
I appreciate any further hints or workarounds.
Thank you,
Usman.
Well, you can use
screen -S psql -p 0 -X stuff $'\n'
or better (works for me)
screen -S mname -p 0 -X stuff `echo -ne '\015'`
-p 0 is needed to select the window.
Have you tried this to "press enter" after your readreg and paste?
screen -S psql -X stuff $'\n'
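Putting those pieces together with the original commands (a sketch; window 0 is assumed, as above):
screen -S psql -p 0 -X readreg p psqlcommands.sql   # load the file into register p
screen -S psql -p 0 -X paste p                      # paste the register into the psql window
screen -S psql -p 0 -X stuff $'\n'                  # send a final Enter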
FINAL ANSWER: It is a bug/feature in GNU screen that it needs a display at least once for the 'paste' command to work. The following are possible workarounds for this problem:
Finally figured out how to utilise psql with pipes and screen. Here is the solution:
mkfifo psql.pipe
screen -L -c ./customrc -S psql -d -m bash -i -c "while (true); do cat psql.pipe; done | /opt/PostgreSQL/9.0/bin/psql -a"
After that, I can cat my commands to the pipe:
cat ./mycommands.sql > psql.pipe
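Individual statements can be sent the same way (a sketch):
echo 'SELECT now();' > psql.pipe   # each write to the fifo is fed straight into psql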
To quit screen and terminate psql, I used:
screen -S psql -X quit
EDIT: I (finally) figured out a solution to my problem without using the screen command at all: the 'empty' utility.
empty -f -i psql.in -o psql.o -p psql.pid <psqlpath>
This allows psql to run in full interactive mode as opposed to the original solution that I used (in which psql does not run in interactive mode).
Thanks.
Usman
I had this same problem. My workaround was to launch screen attached, but pass it a screenrc file whose last command is "detach".
So this is my screenrc
#Set this first, otherwise it messes with the bash profile
hardstatus alwayslastline
#change the hardstatus settings to give a window list at the bottom of the
#screen, with the time and date and with the current window highlighted
#hardstatus string '%{= kG}%-Lw%{= kW}%50> %n%f* %t%{= kG}%+Lw%< %{= kG}%-=%c:%s%{-}'
hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %m-%d %{W}%c %{g}]'
#set scrollback
defscrollback 4096
#detach
detach
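With a screenrc like that (here called ./detachrc, a hypothetical name), the session is started attached, detaches itself at the end of startup, and readreg/paste then work without ever reattaching (a sketch based on the commands above):
screen -L -c ./detachrc -S psql /opt/PostgreSQL/9.0/bin/psql
screen -S psql -X readreg p psqlcommands.sql
screen -S psql -X paste p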
Hope this helps
P
