bash: wait for specific command output before continuing

I know there are several posts asking similar things, but none address the problem I'm having.
I'm working on a script that handles connections to different Bluetooth low energy devices, reads from some of their handles using gatttool and dynamically creates a .json file with those values.
The problem I'm having is that gatttool commands take a while to execute (and are not always successful in connecting to the devices, due to "device is busy" or similar messages). These "errors" not only put wrong data into the .json file, they also let subsequent lines of the script keep writing to it (e.g. adding an extra } or similar). An example of the commands I'm using would be the following:
sudo gatttool -l high -b <MAC_ADDRESS> --char-read -a <#handle>
How can I approach this in a way that I can wait for a certain output? In this case, the ideal output when you --char-read using gatttool would be:
Characteristic value/description: some_hexadecimal_data
This way I can make sure I am following the script line by line instead of having these "jumps".

grep allows you to filter the output of gatttool for the data you are looking for.
If you are actually looking for a way to wait until a specific output is encountered before continuing, expect might be what you are looking for.
From the manpage:
expect [[-opts] pat1 body1] ... [-opts] patn [bodyn]
waits until one of the patterns matches the output of a spawned
process, a specified time period has passed, or an end-of-file is
seen. If the final body is empty, it may be omitted.
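For instance, here is a minimal bash sketch that combines the two suggestions above (retry gatttool and test its output with grep) rather than a full expect script; the MAC address and handle are placeholders:
MAC="AA:BB:CC:DD:EE:FF"
HANDLE="0x000e"
# Keep retrying until gatttool prints the expected line; "device is busy"
# and connection failures simply trigger another attempt.
while true; do
    out=$(sudo gatttool -l high -b "$MAC" --char-read -a "$HANDLE" 2>&1)
    if grep -q "Characteristic value/description:" <<< "$out"; then
        value=${out#*Characteristic value/description: }
        break
    fi
    sleep 1
done
echo "Read value: $value"
# ...only now continue writing to the .json file...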

Related

"tail -F" equivalent in lftp

I'm currently looking for tips to simulate a tail -F in lftp.
The goal is to monitor a log file the same way I could do with a proper ssh connection.
The closest command I found for now is repeat cat logfile.
It works, but it's not ideal when my file is large because it re-displays the entire file each time.
The lftp program specifically will not support this, but if the server supports the extension, it is possible to pull only the last $x bytes from a file with, e.g. curl --range (see this serverfault answer). This, combined with some logic to only grab as many bytes as have been added since the last poll, could allow you to do this relatively efficiently. I doubt if there are any off-the-shelf FTP clients with this functionality, but someone else may know better.
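As a rough sketch of that polling idea in bash (assuming the server supports the SIZE and REST commands; the URL and interval are placeholders):
URL="ftp://user:pass@ftp.example.com/logs/app.log"
offset=0
while sleep 5; do
    # curl -I issues a SIZE request and reports the result as Content-Length.
    size=$(curl -sI "$URL" | awk '/Content-Length/ {print $2}' | tr -d '\r')
    [ -n "$size" ] || continue
    if [ "$size" -gt "$offset" ]; then
        # Fetch only the bytes appended since the last poll.
        curl -s --range "$offset-" "$URL"
        offset=$size
    fi
done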

Bash: how to duplicate input/output from interactive scripts only in complete lines?

How can I capture the input/ output from a script in realtime (such as with tee), but line-by-line instead of character-by-character? My goal is to capture the input typed into the interactive prompts of a script only after backspaces and auto-completion have finished processing (after the RETURN key is hit).
Specifically, I am trying to create a wrapper script for ssh that creates a timestamped log of commands used on remote servers. The script, which uses tee to redirect the output for filtering, works well, but the redirected output gets jumbled with unsubmitted characters whenever I use the backspace key or the up/down keys to scroll through my remote history. For example: service test stopexitservice test stopart or cd ..logs[1Pls -al.
Perhaps there is a way to capture the terminal's scrollback and redirect that like with tee?
Update: I have found a character-based cleanup solution that does what I want most of the time. However, I am still hoping for an answer to this question (which may well be msw's answer that it is very difficult to do).
In the Unix world there are two primary modes of handling keyboard input. The first is known as 'raw', in which characters are passed from the terminal to the reading program one at a time. This is the mode editors (and the like) use, because the editor needs to respond immediately when you press a key.
The other terminal discipline is called 'cooked' which is the line by line behavior that you think of as the bash line by line input where you get to backspace and the command is not executed until you press return. Ssh has to take your input in raw, character-by-character mode because it has no idea what is running on the other side. For example, if you are running an editor on the far side, it can't wait for a return before sending the key-press. So, as some have suggested, grabbing shell history on the far side is the only reasonable way to get a command-by-command record of the bash commands you typed.
I oversimplified for clarity; actually, most installations of bash take input in raw mode because they allow editor-like command-line editing. For example, Ctrl-P scrolls up the command history and Ctrl-A goes to the beginning of the line. Bash needs to see those keys the moment they are typed, rather than waiting for a return.
This is another reason that capturing on the local side is obnoxiously difficult: if you capture on the local side, the stream will be filled with backspaces and all of bash's editing commands. To get a true transcript of what the remote shell actually executed, you would have to parse the character stream as if you were the remote shell. There is also a problem if you run something like
vi /some_file/which_is_on_the_remote/machine
the input stream to the local ssh will be filled with movement commands, snippets of text, backspaces, and so on, and it would be bloody difficult to figure out what is part of a bash command and what is you talking to the editor.
Few things involving computers are impossible; getting clean input from the local side of an ssh invocation is really, really hard.
I question the actual utility of recording the commands that you execute on a local or remote machine. The reason is that there is so much state which is not visible from a command log. As a simple example here's a log of two commands:
17:00$ cp important_file important_file.bak
17:15$ rm important_file
and two days later you are trying to figure out whether important_file.bak has the contents you intended. Given that log, you can't answer that simple question. Even if you had the sequence
16:58$ cat important_file
17:00$ cp important_file important_file.bak
17:15$ rm important_file
If you aren't capturing the output, the cat in the log will not tell you anything. Give me almost any command sequence and I can envision a scenario in which it will not give you the information you need to make sense of what was done.
For a very similar purpose I use GNU screen, which offers the option to record everything you do in a shell session (input and output). The log it creates also contains undesirable characters, but I clean them up with perl:
perl -ne 's/\x1b[[()=][;?0-9]*[0-9A-Za-z]?//g;s/\r//g;s/\007//g;print' < screenlog.0
I hope this helps.
Some features of screen:
http://speaking-my-language.blogspot.com/2010/09/top-5-underused-gnu-screen-features.html
Site where I found the perl one-liner:
https://superuser.com/questions/99128/removing-the-escape-characters-from-gnu-screens-screenlog-n
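For reference, a minimal sketch of that workflow (screen's -L flag enables logging to screenlog.0 in the current directory; the host name is a placeholder):
# 1. Start a logged session (this drops you into a new shell).
screen -L
# 2. Inside the session, do your work on the remote host, then exit.
ssh user@remote-host
exit
# 3. Back outside, strip the escape sequences from the raw log.
perl -ne 's/\x1b[[()=][;?0-9]*[0-9A-Za-z]?//g;s/\r//g;s/\007//g;print' < screenlog.0 > session.log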

Emulating 'named' process substitutions

Let's say I have a big gzipped file data.txt.gz, but often the ungzipped version needs to be given to a program. Of course, instead of creating a standalone unpacked data.txt, one could use the process substitution syntax:
./program <(zcat data.txt.gz)
However, depending on the situation, this can be tiresome and error-prone.
Is there a way to emulate a named process substitution? That is, to create a pseudo-file data.txt that would 'unfold' into the process substitution zcat data.txt.gz whenever it is accessed. Not unlike the way a symbolic link forwards a read operation to another file, except that in this case it would need to be a temporary named pipe.
Thanks.
PS. Somewhat similar question
Edit (from comments): The actual use case is a large gzipped corpus that, besides being used in its raw form, also sometimes needs to be processed with a series of lightweight operations (tokenization, lowercasing, etc.) and then fed to some "heavier" code. Storing a preprocessed copy wastes disk space, and repeatedly retyping the full preprocessing pipeline can introduce errors. At the same time, running the pipeline on the fly incurs only a tiny computational overhead, hence the idea of a long-lived pseudo-file that hides the details under the hood.
As far as I know, what you are describing does not exist, although it's an intriguing idea. It would require kernel support so that opening the file would actually run an arbitrary command or script instead.
Your best bet is to just save the long command to a shell function or script to reduce the difficulty of invoking the process substitution.
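For instance, a small wrapper function along these lines (the name, the preprocessing step, and the flag are only illustrative):
# Run any program with the decompressed (and lightly preprocessed)
# corpus supplied through a process substitution.
with_corpus() {
    local prog=$1; shift
    "$prog" "$@" <(zcat data.txt.gz | tr '[:upper:]' '[:lower:]')
}
# Usage:
with_corpus ./program --some-flag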
There's a spectrum of options, depending on what you need and how much effort you're willing to put in.
If you need a single-use file, you can just use mkfifo to create it, start a redirection of your archive into the fifo, and pass the fifo's filename to whoever needs to read from it.
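A minimal sketch of that single-use variant (paths are illustrative):
mkfifo /tmp/data.txt
# The writer blocks until something opens the fifo for reading.
zcat data.txt.gz > /tmp/data.txt &
./program /tmp/data.txt
rm /tmp/data.txt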
If you need to repeatedly access the file (perhaps simultaneously), you can set up a socket using netcat that serves the decompressed file over and over.
With "traditional netcat" this is as simple as while true; do nc -l -p 1234 -c "zcat myfile.tar.gz"; done. With BSD netcat it's a little more annoying:
# Make a dummy FIFO
mkfifo foo
# Use the FIFO to track new connections
while true; do cat foo | zcat myfile.tar.gz | nc -l 127.0.0.1 1234 > foo; done
Anyway once the server (or file based domain socket) is up, you just do nc localhost 1234 to read the decompressed file. You can of course use nc localhost 1234 as part of a process substitution somewhere else.
Depending on your needs, you may want to make the bash script more sophisticated for caching etc, or just dump this thing and go for a regular web server in some scripting language you're comfortable with.
Finally, and this is probably the most "exotic" solution, you can write a FUSE filesystem that presents virtual files backed by whatever logic your heart desires. At this point you should probably have a good hard think about whether the maintainability and complexity costs of where you're going really offset someone having to call zcat a few extra times.

2-way communication with background process (I/O)

I have a program that runs in the command line (i.e. $ run program starts up a prompt) and performs mathematical calculations. It has its own prompt that takes text input and responds through standard out/error (or creates a separate X window if needed, but this can be disabled). Sometimes I would like to send it small inputs, and other times I send in a large text file with one piece of input per line. This program takes a lot of resources and also has a long startup time, so it is best to have only one instance of it running at a time. I could keep the program's prompt open and supply input that way, or I can send the input along with an exit command (to leave the prompt), which just prints the output. The problem with sending the request with an exit command is that the program must start up each time (slow...). Furthermore, the output of this program is sometimes cryptic, and it would be helpful to filter it in some way (e.g. simplify the output, apply ANSI colors, etc.).
This all makes me want to put some two-way I/O filter (or is that a "pipe"? a "wrapper"?) around the program so that it can run in the background as a single process. I would then communicate with it without having to restart it, all while filtering the output to be more user friendly. I have been looking all over for ideas and I am stumped as to how to accomplish this in some simple, shell-accessible manner.
One thing I have tried was redirecting stdin and stdout to files, but the program hangs (doesn't quit) and only reads the file once, leaving me unable to continue communicating. I think this is because the prompt keeps waiting for more user input after the EOF. I thought this could be set up as a local server, but I am uncertain how to begin accomplishing that.
I would love to find some simple way to accomplish this. Additionally, if you can think of a way to perform this, do you think there is a way to also allow for attaching or detaching to the prompt by request? Any help and ideas would be greatly appreciated.
You could create two named pipes (man mkfifo) and redirect input and output:
myprog < fifoin > fifoout
Then you could open new terminal windows and do this in one:
cat > fifoin
And this in the other:
cat < fifoout
(Or use tee to save the input/output as well.)
To dump a large input file into the program, use:
cat myfile > fifoin
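Putting it together, a minimal sketch of the wiring (myprog stands in for the actual calculator; holding the fifos open on spare file descriptors keeps the program from seeing EOF after the first write):
mkfifo fifoin fifoout
# Start the program in the background, wired to both fifos.
myprog < fifoin > fifoout &
# Hold both fifos open from this shell so the program keeps running
# between commands and its output is not lost.
exec 3> fifoin 4< fifoout
# Send a command and read one line of the reply.
echo "2 + 2" >&3
read -r reply <&4
echo "reply: $reply"
# Dump a whole input file into the program.
cat myfile >&3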

create a rolling buffer in bash

I want to use curl to get a stream from a remote server and write it to a buffer. So far so good: I just do curl http://the.stream > /path/to/thebuffer. The thing is, I don't want this file to get too large, so I want to be able to delete the first bytes of the file as I simultaneously append to the end. Is there a way of doing this?
Alternatively, if I could write n bytes to buffer1, then switch to buffer2, buffer3... and when buffer x is reached, delete buffer1 and start again, without losing the data coming in from curl (it's a live stream, so I can't stop curl). I've been reading the man pages for curl, cat, and read, but can't see anything promising.
There isn't any particularly easy way to do what you are seeking to do.
The nearest approach is probably to create a FIFO and redirect the output of curl to it. You then have a program such as split or csplit reading the FIFO and writing to different files. If you decide the split programs are not the right tool, you may need to write your own variation on them. You can then decide how to process the files that are created, and when to remove them.
Note that curl will hang until there is a process reading from the FIFO. When the process reading the FIFO exits, curl will get either a SIGPIPE signal or a write error, either of which should stop it.
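A rough sketch of that approach, assuming GNU split and xargs (sizes, paths, and the retention policy are only illustrative):
mkdir -p /tmp/chunks
mkfifo /tmp/streamfifo
# split reads the fifo and writes fixed-size, numbered chunks:
# chunk_000000, chunk_000001, ...
split -b 10M -d -a 6 - /tmp/chunks/chunk_ < /tmp/streamfifo &
# curl feeds the fifo; it blocks until split starts reading.
curl -s http://the.stream > /tmp/streamfifo &
# Periodically drop everything but the three newest chunks.
while sleep 60; do
    ls -1t /tmp/chunks/chunk_* | tail -n +4 | xargs -r rm --
done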
