bashdb: Can I examine the data flowing through a pipe?

I'm trying to debug a bash script that involves a command of the form:
VAR=$(cmd1|cmd2|cmd3)
I can debug it in bashdb, using the s command, which does something like this:
bashdb(2): s
2: VAR=$(cmd1|cmd2|cmd3)
cmd1
bashdb(3): s
2: VAR=$(cmd1|cmd2|cmd3)
cmd2
That is, it allows me to run the commands in the pipeline one by one. Logic indicates that it must therefore store the contents of the pipe somewhere, so that it can feed it into the next command when I type s again. How do I get bashdb to show this data?

Try tee.
VAR=$(cmd1|tee cmd1.out|cmd2|tee cmd2.out|cmd3|tee cmd3.out)
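Each tee writes a copy of what flows through that stage of the pipe into a file, so after the assignment runs you can inspect every intermediate result:
cat cmd1.out    # exactly what cmd1 fed into the pipe
cat cmd2.out    # what cmd2 produced from that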

Related

iTerm: Is there a way to reprint the output of a previous command without re-running it?

Of course, we can send the output of any command to a file using command > /tmp/filename.
Or, even better, use command | tee /tmp/filename to have the standard output go to the terminal as well as the file.
However, if I just executed command, is there a way for iTerm to reprint the output that command already sent to the console without re-running it? (Example use case: command is not idempotent and I want to grep something without having to touch the mouse.)
You could use the script command, which records your input and the output your commands generate.
To use it, just run script before you start; this drops you into a new shell whose session is recorded in a file called typescript in the current directory.
Once you are done, you can exit, and all of the input and output will be in that typescript log file.
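A minimal session sketch (typescript is the default log name; you can pass your own, e.g. script session.log):
script session.log      # starts recording and opens a new shell
ls /etc                 # run whatever commands you like; all output is captured
exit                    # ends the recorded shell
grep hosts session.log  # search the captured output without re-running anything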

Output of complete script to variable

I have a rather complex bash script which is normally run manually and thus needs live output on the console (stdout and stderr).
However, since the outcome and output of this script are rather important, I'd like to save its output into a database at the end.
I have already a trap function for this and the database query as such is also not a problem. The problem is: How do I get the output of the entire script till that point into a variable?
The live console output should be preserved. The output after the database query (if any) does not matter.
Is this possible at all? Might it be necessary to wrap the script into another script (file)?
I do a similar task like this:
exec 5>&1                                   # duplicate stdout as fd 5, so tee can still reach the terminal
message=$(check|tee /dev/fd/5)              # capture the output in $message while echoing it live via fd 5
mutt -s "$subjct" "$mailto" <<< "$message"  # mail the captured output
Put your script in place of the check function and change the mailing step to your database query.
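Adapted to the question, a minimal runnable sketch; main and save_to_db are hypothetical stand-ins for the real script body and the database query:
#!/usr/bin/env bash
main()       { echo "doing work"; echo "a warning" >&2; }   # hypothetical script body
save_to_db() { printf 'would store: %s\n' "$1"; }           # hypothetical DB query
exec 5>&1                            # fd 5 duplicates the terminal's stdout
output=$(main 2>&1|tee /dev/fd/5)    # capture stdout+stderr, still printing live
save_to_db "$output"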

When data is piped from one program via | is there a way to detect what that program was from the second program?

Say you have a shell command like
cat file1 | ./my_script
Is there any way, from inside my_script, to detect which command was run to produce the pipe input (in the above example, cat file1)?
I've been digging into it and so far I've not found any possibilities.
I've been unable to find any environment variable set in the process space of the second command recording the full command line; the command data my_script sees (via /proc etc.) is just ./my_script and doesn't include any information about it being run as part of a pipe. Even checking the process list from inside the second command doesn't seem to provide any data, since the first process seems to exit before the second starts.
The best information I've been able to find suggests that in bash you can sometimes get the exit codes of the processes in the pipe via PIPESTATUS; unfortunately nothing similar seems to exist for the names of the commands/files in the pipe. My research seems to say it's impossible to do in a generic manner (I can't control how people decide to run my_script, so I can't force third-party pipe-replacement tools to be used over built-in shell pipes), but at the same time it doesn't seem like it should be impossible, since the shell has the full command line present as the command is run.
(update adding in later information following on from comments below)
I am on Linux.
I've investigated the /proc/$$/fd data and it almost does the job. If the first command doesn't exit for several seconds while piping data to the second command, you can read /proc/$$/fd/0 to see the value pipe:[PIPEID] that it symlinks to. That can then be used to search through the /proc/<pid>/fd/ data of the other running processes to find another process with the same PIPEID open, which gives you the first process's pid.
However, in most real-world tests of piping I've done, you can't trust that the first command will stay running long enough for the second one to locate its pipe fd in /proc before it exits (exiting removes the proc data, preventing it from being read). So I can't rely on this method returning any information.
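A sketch of that /proc scan (Linux only, and racy: it only works while the upstream process is still alive), to be run from inside my_script:
my_pipe=$(readlink "/proc/$$/fd/0")             # e.g. "pipe:[123456]"
for fd in /proc/[0-9]*/fd/1; do                 # stdout fds of all visible processes
    if [ "$(readlink "$fd" 2>/dev/null)" = "$my_pipe" ]; then
        pid=${fd#/proc/}; pid=${pid%%/*}
        echo "upstream pid: $pid"
        tr '\0' ' ' < "/proc/$pid/cmdline"; echo   # e.g. "cat file1"
    fi
done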

Abort bash script, if a certain console output string appears

I'm using a bash script to automatically run a simulation program. This program periodically prints the current status of the simulation in the console, like "Iteration step 42 ended normally".
Is it possible to abort the script, if the console output is something like "warning: parameter xyz outside range of validity"?
And what can I do if the console output is piped to a text file?
Sorry if this sounds stupid, I'm new to this :-)
Thanks in advance
This isn't an ideal job for Bash. However, you can certainly capture and test STDOUT inside a Bash iteration loop using an admixture of conditionals, grep-like tools, and command substitution.
On the other hand, if Bash isn't doing the looping (e.g. it's just waiting for an external command to finish) then you need to use something like expect. Expect is purpose-built to monitor output streams for regular expressions, and perform branching based on expression matches.
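For the Bash-loop variant, a minimal sketch: ./simulate is a hypothetical stand-in for the simulation program, and the tee keeps a copy in a log file, which also covers the second part of the question:
./simulate 2>&1 | tee simulation.log | while IFS= read -r line; do
    printf '%s\n' "$line"                        # preserve the live console output
    if [[ $line == *"outside range of validity"* ]]; then
        echo "Aborting simulation." >&2
        pkill -f ./simulate                      # stop the simulator itself
        exit 1                                   # exits this subshell; the pipeline returns 1
    fi
done || exit 1                                   # propagate the abort to the whole script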

Switch from file contents to STDIN in piped command? (Linux Shell)

I have a program (that I did not write) which is not designed to read in commands from a file. Entering commands on STDIN is pretty tedious, so I'd like to be able to automate it by writing the commands in a file for re-use. Trouble is, if the program hits EOF, it loops infinitely trying to read the next command, dumping an endless torrent of menu options on the screen.
What I'd like to be able to do is cat a file containing the commands into the program via a pipe, then use some sort of shell magic to have it switch from the file to STDIN when it hits the file's EOF.
Note: I've already considered using cat with the '-' for STDIN. Unfortunately (I didn't know this before), piped commands wait for the first program's output to terminate before starting the second program -- they do not run in parallel. If there's some way to get the programs to run in parallel with that kind of piping action, that would work!
Any thoughts? Thanks for any assistance!
EDIT:
I should note that my goal is not only to prevent the system from hitting the end of the commands file. I would like to be able to continue typing in commands from the keyboard when the file hits EOF.
I would do something like
(cat your_file_with_commands; cat) | sh your_script
That way, when the file with commands is done, the second cat will feed your script with whatever you type on stdin afterwards.
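A self-contained way to see the hand-off, with cat -n standing in for the real program:
(printf 'first\nsecond\n'; cat) | cat -n
The two printed lines appear numbered immediately, and anything you type afterwards is numbered too, until you press Ctrl-D: once printf finishes, the bare cat takes over reading from the keyboard and its lines keep flowing down the same pipe.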
Same as Idelic's answer with simpler syntax ;)
cat your_file_with_commands - | sh your_script
I would think expect would work for this.
Have you tried using something like tail -f commandfile | command? I think that should pipe the lines of the file to command without closing the file descriptor afterwards. Use -n to specify the number of lines to be piped (e.g. -n +1 to start from the first line), since by default tail only emits the last ten.
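A sketch of that suggestion, with ./the_program as a hypothetical stand-in:
tail -n +1 -f commands.txt | ./the_program
tail keeps the pipe open and forwards any lines later appended to commands.txt, but note that, unlike the answers above, it never hands control over to the keyboard.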
