How can I list the SSH connection sequence on a remote computer? - bash

Suppose I am working on LocalA (username userA). I use ssh to tunnel to RemoteB (username userB), then to RemoteC (username userC), then finally to RemoteDestinationD (username userD).
It would look something like the following:
userA@LocalA$ ssh userB@RemoteB
userB@RemoteB$ ssh userC@RemoteC
userC@RemoteC$ ssh userD@RemoteDestinationD
userD@RemoteDestinationD$
Suppose I forget which series of connections I used to connect from LocalA to RemoteDestinationD. It could have been one jump (e.g. userA@LocalA$ ssh userD@RemoteDestinationD) or three jumps (as in my example above). How could I programmatically determine what my series of jumps was? ("Scroll through your history, dummy!" doesn't count....)
I'd like a BASH script that would output something like the following:
userA@LocalA$ --> userB@RemoteB
userB@RemoteB$ --> userC@RemoteC
userC@RemoteC$ --> userD@RemoteDestinationD
Any ideas?
Thanks in advance!

There is no good way to do this, I suspect. There are a bunch of true hacks you could do if you were willing, such as turning on X11 port forwarding and using different ports for counting.
Or, more obnoxiously:
sshcount=0
ssh userA@RemoteA "echo sshcount=$(($sshcount+1)) > .bashrc; bash"
And from there do something similar to get to B, so sshcount keeps incrementing.
Clearly this should be done by overwriting a .sshcountrc file or something and sourcing that from the .bashrc. And then you might need to make the file name depend on some munging of $SSH_TTY if you want to support running parallel versions of this.
And the whole thing is really a hack, so maybe you shouldn't do it at all and should instead solve the problem another way, or remove the need for counting in the first place.
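For the record, a minimal sketch of that .sshcountrc variant (untested; host names and the -t flag are just illustrative, and it assumes each remote ~/.bashrc already contains a line like [ -f ~/.sshcountrc ] && . ~/.sshcountrc):
sshcount=${sshcount:-0}
# the arithmetic expands locally, which is the point of the trick;
# -t forces a pty so the replacement login shell is interactive
ssh -t userB@RemoteB "echo sshcount=$((sshcount + 1)) > ~/.sshcountrc; exec bash -l"
The same trick could append a line such as "$(whoami)@$(hostname) --> userB@RemoteB" to a ~/.sshchain file on each hop, which would get you output close to what the question asks for.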

Related

How to properly write an interactive shell program which can exploit bash's autocompletion mechanism

(Please, help me adjust title and tags.)
When I run connmanctl I get a different prompt,
enrico:~$ connmanctl
connmanctl>
and different commands are available, like services, technologies, connect, ...
I'd like to know how this thing works.
I know that, in general, changing the prompt can be just a matter of changing the variable PS1. However, that alone (read "the command connmanctl changes PS1 and returns") wouldn't have any effect on the functionality of the command line (I would still be in the same bash process).
Indeed, the fact that the available commands change looks to me like proof that connmanctl keeps running the whole time the prompt is connmanctl>, and that, upon running connmanctl, a while loop is entered with a read statement in it, followed by a bunch of commands which process the input.
In this latter scenario that I imagine, there's no need even to change PS1, as the connmanctl> line could simply be produced by echo -n "connmanctl> ".
The reason behind this curiosity is that I'm trying to write a wrapper around connmanctl. I've already written it, and it works as intended, except that I don't know how to properly set up the autocompletion feature, and I think that in order to do so I first need to understand the right way to write an interactive shell script.
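The kind of loop I imagine would be something like the following (a hypothetical sketch of the pattern, not connmanctl's actual code; read -e enables readline editing, and a real wrapper would still need completion wired up separately with complete or bind):
while IFS= read -r -e -p 'connmanctl> ' line; do
    case "$line" in
        quit|exit) break ;;
        services)  echo '(list services here)' ;;
        '')        ;;                            # ignore empty input
        *)         echo "Unknown command: $line" ;;
    esac
done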

"tail -F" equivalent in lftp

I'm currently looking for tips to simulate a tail -F in lftp.
The goal is to monitor a log file the same way I could do with a proper ssh connection.
The closest command I found for now is repeat cat logfile.
It works, but it's not ideal when the file gets big, because it re-displays the whole file every time.
The lftp program specifically will not support this, but if the server supports the extension, it is possible to pull only the last $x bytes from a file with, e.g., curl --range (see this serverfault answer). This, combined with some logic to only grab as many bytes as have been added since the last poll, could let you do this relatively efficiently. I doubt there are any off-the-shelf FTP clients with this functionality, but someone else may know better.
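A rough polling sketch of that idea (the URL and credentials are placeholders, and it assumes the server honours byte-range requests):
#!/usr/bin/env bash
# poll the remote log and print only the bytes added since the last round
url='ftp://user:pass@host/path/logfile.log'
offset=0
tmp=$(mktemp)
while sleep 5; do
    # request only bytes past what we have already printed; if the server
    # has nothing new (or rejects the range), just try again next round
    curl -s --range "${offset}-" "$url" -o "$tmp" || continue
    size=$(wc -c < "$tmp")
    if [ "$size" -gt 0 ]; then
        cat "$tmp"
        offset=$(( offset + size ))
    fi
done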

Bash: how to duplicate input/output from interactive scripts only in complete lines?

How can I capture the input/output from a script in real time (such as with tee), but line-by-line instead of character-by-character? My goal is to capture the input typed into the interactive prompts of a script only after backspaces and auto-completion have finished processing (after the RETURN key is hit).
Specifically, I am trying to create a wrapper script for ssh that creates a timestamped log of commands used on remote servers. The script, which uses tee to redirect the output for filtering, works well, but the redirected output gets jumbled with unsubmitted characters whenever I use the backspace key or the up/down keys to scroll through my remote history. For example: service test stopexitservice test stopart or cd ..logs[1Pls -al.
Perhaps there is a way to capture the terminal's scrollback and redirect that like with tee?
Update: I have found a character-based cleanup solution that does what I want most of the time. However, I am still hoping for an answer to this question (which may well be msw's answer that it is very difficult to do).
In the Unix world there are two primary modes of handling keyboard input. The first is known as 'raw', in which characters are passed from the terminal to the reading program one at a time. This is the mode that editors (and the like) use, because the editor needs to respond immediately when you press a key.
The other terminal discipline is called 'cooked', which is the line-by-line behavior you think of as normal bash input, where you get to backspace and the command is not executed until you press return. Ssh has to take your input in raw, character-by-character mode because it has no idea what is running on the other side. For example, if you are running an editor on the far side, it can't wait for a return before sending the key-press. So, as some have suggested, grabbing shell history on the far side is the only reasonable way to get a command-by-command record of the bash commands you typed.
I oversimplified for clarity; actually, most installations of bash take input in raw mode because they allow editor-like command editing. For example, Ctrl-P scrolls up the command history and Ctrl-A goes to the beginning of the line, and bash needs to see those keys the moment they are typed, not after a return.
This is another reason that capturing on the local side is obnoxiously difficult: if you capture on the local side, the stream will be filled with backspaces and all of bash's editing commands. To get a true transcript of what the remote shell actually executed, you would have to parse the character stream as if you were the remote shell. There is also a problem if you run something like
vi /some_file/which_is_on_the_remote/machine
the input stream to the local ssh will be filled with movement commands, snippets of text, backspaces, and so on, and it would be bloody difficult to figure out what is part of a bash command and what is you talking to the editor.
Few things involving computers are impossible; getting clean input from the local side of an ssh invocation is really, really hard.
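If you do go the far-side-history route suggested above, a rough sketch is the following (the settings are standard bash history knobs, but treat host names and paths as placeholders):
# on each remote host, in ~/.bashrc: timestamp history and flush it after
# every command so the file is always current
export HISTTIMEFORMAT='%F %T  '
shopt -s histappend
PROMPT_COMMAND='history -a'
# later, from the local machine, pull the raw record (timestamps show up
# as "#<epoch>" comment lines when HISTTIMEFORMAT is set)
ssh userB@RemoteB cat .bash_history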
I question the actual utility of recording the commands that you execute on a local or remote machine. The reason is that there is so much state which is not visible from a command log. As a simple example here's a log of two commands:
17:00$ cp important_file important_file.bak
17:15$ rm important_file
and two days later you are trying to figure out whether important_file.bak should have the contents you intended or not. Given that log you can't answer that simple question. Even if you had the sequence
16:58$ cat important_file
17:00$ cp important_file important_file.bak
17:15$ rm important_file
If you aren't capturing the output, the cat in the log will not tell you anything. Give me almost any command sequence and I can envision a scenario in which it will not give you the information you need to make sense of what was done.
For a very similar purpose I use GNU screen, which offers the option to record everything you do in a shell session (input/output). The log it creates also contains undesirable escape characters, but I clean them up with perl:
perl -ne 's/\x1b[[()=][;?0-9]*[0-9A-Za-z]?//g;s/\r//g;s/\007//g;print' < screenlog.0
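For reference, the whole workflow is roughly this (a sketch; screenlog.0 is screen's default log name for the first window, and the host name is a placeholder):
screen -L                 # start a session with output logging to ./screenlog.0
ssh userB@RemoteB         # work on the remote host as usual; everything is logged
exit                      # leave the remote shell, then exit screen itself
# afterwards, strip the escape sequences with the perl one-liner above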
I hope this helps.
Some features of screen:
http://speaking-my-language.blogspot.com/2010/09/top-5-underused-gnu-screen-features.html
Site I found the perl-oneliner:
https://superuser.com/questions/99128/removing-the-escape-characters-from-gnu-screens-screenlog-n

Interact with serial device with shell scripting

I have a serial USB device that is connected to a Linux box, and it works fine with serial communication programs such as minicom.
For instance, within that program, I send the string "V" and I get back an answer: "UBW FW D Version 1.4.3".
Now, I'd like to write a shell script that could do the same, in order to test variables. I investigated the possibility of using minicom non-interactively, but it seems that's not possible. I also tried the obvious "echo V > /dev/ttyACM0", but had no luck either.
Any idea of how can I send and receive strings to/from a serial device in such way I can use the received data in a shell script?
Thanks
In the olden days of modems, we would use the program 'expect' to send and receive data from the serial line. This doesn't exactly solve your problem, but might get you some of the way there.
Have a look at Use expect in bash script to provide password to SSH command
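If you would rather stay in plain bash instead of pulling in expect, a rough sketch with stty and a read timeout is another option (the device path, baud rate, and trailing carriage return are assumptions about the UBW, so adjust as needed):
dev=/dev/ttyACM0
stty -F "$dev" 9600 raw -echo        # raw mode, no local echo
exec 3<>"$dev"                       # open the port read/write on fd 3
printf 'V\r' >&3                     # send the command
IFS= read -r -t 2 reply <&3          # wait up to 2 seconds for one reply line
exec 3>&-                            # close the descriptor
echo "Device answered: $reply"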
The atinout program does exactly what you are asking for. Example:
$ echo AT | atinout - /dev/ttyACM0 -
AT
OK
$
Now, from your example command and response, I see that your "modem" seems to have been configured or modified not to return the OK Final Result Code, and atinout absolutely needs that for its operation, so make sure the UBW behaves properly.

What does CreateFile("CONIN$" ..) do?

I was hacking away at the source code for plink to make it compatible with unison.
If you don't know, unison is a file synchronization tool. It runs an "ssh" command to connect to a remote server, but there's no ssh.exe for Windows; there's plink, which is very close but not close enough (it doesn't behave the way unison expects), so people usually make wrappers around it, like this one.
One of the problems is that unison expects the password prompt to be printed to stderr (but plink prints it to stdout, which confuses unison), so I thought, well, that should be simple enough: hack my way through plink's code and make it print the prompt to stderr. So I hacked my way through and did that.
Next problem: I can't respond to the prompt! No matter what I type, it has no effect.
The code for getting input is roughly like this:
hin = GetStdHandle(STD_INPUT_HANDLE);
....
r = ReadFile(hin, .....);
I'm not sure why it's done this way, but I'm not an expert in designing command-line tools for Windows, so what do I know! But I figure something is missing in how the input handle is set up.
I looked at the source code for the above wrapper tool and I see this:
hconin=CreateFile("CONIN$",GENERIC_READ|GENERIC_WRITE,FILE_SHARE_READ,0,OPEN_EXISTING,0,0);
and I try it (just for the heck of it)
hin=CreateFile("CONIN$",GENERIC_READ|GENERIC_WRITE,FILE_SHARE_READ,0,OPEN_EXISTING,0,0);
....
r = ReadFile( hin ...... )
And surprisingly, it works! I can now respond to the prompt!
Why is this? What is "CONIN$"? And why is it different from the STD_INPUT_HANDLE?
I can sort of "guess" that FILE_SHARE_READ and OPEN_EXISTING are playing a role in this (since ssh is being run from within another process), but I want to understand what's going on here, and make sure that this code doesn't have some unwanted side effects or security holes or something scary like that!
CONIN$ is the console input device. Normally, stdin is an open file handle to this, but if stdin is redirected for some reason, then using CONIN$ will allow you to get access to the console despite the redirection. Reference.
