I want to add some extra logging, so I'd like bash to run "myevaluator cmdline" after expanding all the environment variables in cmdline. Is that possible?
Update: basically I want to extend my bash history logging to include the PID of the main process started by the command, along with things from the /proc/ tree.
For instance, if I run "java xyz" from the bash command line, I want to log the PID of the java process started by that command line.
The only way I can see to implement this would be to have bash call my custom evaluator with the final command line; my evaluator would then take care of starting the process and doing the logging.
So the question is: how do I get bash to call "myevaluator cmdline" whenever bash tries to execute an external process?
Use set -x in your script (or /bin/bash -x your_script.sh) to print every line, prepended with PS4, to stderr.
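For example, a minimal sketch (the PS4 value here is just an illustration; the default prefix is "+ "):

#!/bin/bash
# set -x prints each command, fully expanded, to stderr before running it.
# PS4 controls the trace prefix; $LINENO is expanded when each line is traced.
PS4='+ line $LINENO: '
set -x
name=world
echo "hello $name"

Running this prints the expanded commands to stderr:

+ line 6: name=world
+ line 7: echo 'hello world'

Note that set -x only traces what the shell runs; it does not hand the expanded command line to another program the way your custom evaluator would, so it covers the logging part of the question but not the PID capture.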
I want to execute a bash script from Robot Framework.
In the terminal I use this command:
bash /home/Documents//script.sh --username=root --password=hello --host=100.100.100.100 --port=400 - --data='{"requestId":1,"parameters":{"name":"check","parameters":{"id":"myID"}}}'
and it works
In my Robot script I try this:
Running script
    ${result} =    Run Process    bash    /home/Documents//script.sh    "username\=root"    "password\=hello"    "host\=100.100.100.100"    "port\=400"    "data\='{"requestId":1,"parameters":{"name":"check","parameters":{"id":"myID"}}}'"    shell=True    stdout=stdout.txt
    Log To Console    ${result}
    Log    ${result}
    Log    ${result.stdout}
    Log    ${result.stderr}
But I get "Missing required arguments: username, password, host, port".
The process doesn't recognise the arguments.
How do I pass script arguments in Robot Framework with the Process Library?
Please show examples; I already checked the Process Library documentation on specifying command and arguments, but I don't understand it.
After the night I found the solution:
Running script
    ${result} =    Run Process    bash    /home/Documents//script.sh    username\=root    password\=hello    host\=100.100.100.100    port\=400    data\='{"requestId":1,"parameters":{"name":"check","parameters":{"id":"myID"}}}'    shell=True    stdout=stdout.txt
The options should be unquoted, but each = must be escaped as \=; otherwise Robot Framework interprets name=value tokens as its named-argument syntax (here, as Run Process configuration) instead of passing them to the script.
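If you want to verify what the script actually receives, a tiny stand-in for script.sh can help (the stub itself is just an illustration):

#!/bin/bash
# Print each argument on its own line so quoting and escaping
# problems become visible.
printf 'arg: %s\n' "$@"

Pointing Run Process at this stub shows exactly how Robot Framework split and passed the arguments.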
I'm writing a script to retrieve various environment parameters from a list of servers. My script returns no value when run, but the same command returns the desired value outside of a script.
I have tried using a couple of variations to retrieve the same data. One of the commands fails because of restrictions placed on the accounts I have access to. The second command works but only if executed in an elevated mode.
This fails with access denied (pwdx is restricted):
dzdo pgrep -f /some/path | xargs pwdx
This works outside of a script but returns no value within a script:
dzdo /bin/readlink -e /proc/"$(pgrep -f /some/path)"/cwd
When using "bash -x" to execute my scriipt, I see the "readlink" code is blank.
Ideally, I would like to return the PID and path of the process running as the "pgrep" command does. I can work with the path alone as returned by the "readlink" version returns. The end goal is to gather the information from several servers for audit purposes. (version, etc.)
Am I using the wrong syntax for the "readlink" command? I'm fairly new to coding bash scripts so I appreciate any guidance to help understand when to to what if I'm using a command in a script vs command line.
If pwdx is the restricted program, you need to run that with dzdo, not pgrep.
pgrep -f /some/path | dzdo xargs pwdx
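Separately, if pgrep can match more than one process, the $(pgrep ...) substitution in the readlink version expands to several PIDs and produces an invalid /proc path, which may be why it comes back blank in the script. A minimal sketch that handles that case, keeping the question's dzdo and /some/path placeholders:

#!/bin/bash
# Print "PID: working-directory" for every process matching the pattern.
for pid in $(pgrep -f /some/path); do
    printf '%s: %s\n' "$pid" "$(dzdo /bin/readlink -e /proc/"$pid"/cwd)"
done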
As an example, I am trying to capture the raw commands that are output by the following script:
https://github.com/adampointer/go-deribit/blob/master/scripts/generate-models.sh
I have tried following a previous answer:
BASH: echoing the last command run
but the output I am getting is as follows:
last command is gojson -forcefloats -name="${struct}" -tags=json,mapstructure -pkg=${p} >> models/${p}/${name%.*}_request.go
What I would like to do is capture the raw command, in other words have variables such as ${struct}, ${p} and ${p}/${name%.*} replaced by the actual values that were used.
How do I do this?
At the top of the script, after the shebang #!/usr/bin/env bash or #!/bin/bash (if there is one), add set -x.
set -x: Print commands and their arguments as they are executed.
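As a minimal sketch of what that looks like here, with illustrative values for the variables (the values are assumptions; the real ones come from the script's loop, and redirections are not shown in the trace):

#!/usr/bin/env bash
set -x
struct=BookSummary
p=public
name=get_book_summary.json
gojson -forcefloats -name="${struct}" -tags=json,mapstructure -pkg=${p} >> "models/${p}/${name%.*}_request.go"

The trace written to stderr shows the command with every variable replaced by its value:

+ struct=BookSummary
+ p=public
+ name=get_book_summary.json
+ gojson -forcefloats -name=BookSummary -tags=json,mapstructure -pkg=public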
Run the script in debug mode, which will trace all the commands in the script: https://stackoverflow.com/a/10107170/988525.
You can do that without editing the script by typing "bash -x generate-models.sh".
Looking for some basic help in shell programming.
Suppose we have a command known as foobar; what is the effect of each of the following shell invocations?
exec foobar
exec 2> /var/log/foobar.log
The first exec command should only be used in a script, not at an interactive terminal. It replaces the shell with the program foobar instead of running it as a separate child process, so any commands in the script after exec foobar will not be executed (even if the shell fails to find foobar to execute). In an interactive terminal session, by contrast, the shell reports the error and continues.
exec [-cl] [-a name] [command [arguments]]
If command is supplied, it replaces the shell without creating a new process. If the -l option is supplied, the shell places a dash at the beginning of the zeroth argument passed to command. This is what the login program does. The -c option causes command to be executed with an empty environment. If -a is supplied, the shell passes name as the zeroth argument to command. If command cannot be executed for some reason, a non-interactive shell exits, unless the execfail shell option is enabled. In that case, it returns failure. An interactive shell returns failure if the file cannot be executed.
The second exec (with I/O redirection but no command) changes things so that the standard error stream goes to the file /var/log/foobar.log. Any further error messages from the shell, or from commands executed by the shell, go to the log file (unless there's another lot of I/O redirection).
If no command is specified, redirections may be used to affect the current shell environment. If there are no redirection errors, the return status is zero; otherwise the return status is non-zero.
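A minimal sketch combining both forms (the log path and foobar's argument are illustrative):

#!/bin/bash
# From here on, anything the shell or its commands write to
# standard error goes to the log file.
exec 2>>/var/log/foobar.log
echo "about to replace this shell with foobar" >&2   # lands in the log
# Replace this shell process with foobar; if the exec succeeds,
# nothing below this line ever runs. If it fails, a non-interactive
# shell exits here (unless the execfail option is enabled).
exec foobar --verbose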
exec foobar
will replace your shell process with foobar. I do not think you mean exec 2>/var/log/foobar.log but rather exec foobar 2>/var/log/foobar.log; that does the same replacement while also sending file descriptor 2, i.e. standard error messages, to the specified log file. You can read the man page here.
The exec(1) command is similar to the exec(3) call: it replaces the code segment of the calling process with that of the called program. The 1 and 3 refer to man page sections.
I have a bash script that performs several file operations. When any user runs this script, it executes successfully and outputs a few lines of text, but when I try to cron it there are problems. It seems to run (I see an entry in the cron log showing it was kicked off), but nothing happens: it doesn't output anything and doesn't do any of its file operations. It also doesn't appear anywhere in the running processes, so it appears to be exiting immediately.
After some troubleshooting I found that removing "set -e" resolved the issue, it now runs from the system cron without a problem. So it works, but I'd rather have set -e enabled so the script exits if there is an error. Does anyone know why "set -e" is causing my script to exit?
Thanks for the help,
Ryan
With set -e, the script will stop at the first command which gives a non-zero exit status. This does not necessarily mean that you will see an error message.
Here is an example, using the false command which does nothing but exit with an error status.
Without set -e:
$ cat test.sh
#!/bin/sh
false
echo Hello
$ ./test.sh
Hello
$
But the same script with set -e exits without printing anything:
$ cat test2.sh
#!/bin/sh
set -e
false
echo Hello
$ ./test2.sh
$
Based on your observations, it sounds like your script is failing for some reason (presumably related to the different environment, as Jim Lewis suggested) before it generates any output.
To debug, add set -x to the top of the script (as well as set -e) to show commands as they are executed.
When your script runs under cron, the environment variables and path may be set differently than when the script is run directly by a user. Perhaps that's why it behaves differently?
To test this: create a new script that does nothing but printenv and echo $PATH.
Run this script manually, saving the output, then run it as a cron job, saving that output.
Compare the two environments. I am sure you will find differences... an interactive login shell will have had its environment set up by sourcing a ".login", ".bash_profile", or similar script (depending on the user's shell). This generally will not happen in a cron job, which is usually the reason for a cron job behaving differently from running the same script in a login shell.
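For instance, a minimal sketch of such a diagnostic script (the output path is illustrative):

#!/bin/bash
# Dump the sorted environment plus PATH so two runs can be compared.
{
    printenv | sort
    echo "PATH=$PATH"
} > "/tmp/envdump.$$.txt"

Run it once by hand and once from cron, then diff the two files it leaves in /tmp.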
To fix this: at the top of the script, either explicitly set the environment variables and PATH to match the interactive environment, or source the user's ".bash_profile", ".login", or other setup script, depending on which shell they're using.