I am running a bash script from the playbook. The bash script in turn runs multiple scripts in parallel on a remote machine, and the output only appears on the console once the entire playbook has finished. I want to print the output in real time. Is it possible?
Storing the result with Ansible's 'register' and printing it afterwards doesn't help here, since I need real-time output.
It is not possible to influence a script in "realtime" while it is currently running.
You could use failed_when to catch script errors:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html#controlling-what-defines-failure
or you can catch errors within the script itself at runtime.
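For illustration, a minimal sketch of a task using failed_when (the script path and the error string are assumptions, not from the question):
- name: run the wrapper script
  shell: /path/to/run_all.sh
  register: script_result
  failed_when: "'ERROR' in script_result.stdout"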
I have a very generic playbook with no hard-coded info whatsoever. Everything in the playbook is a variable, filled in by supplying extra vars, even the host names for connections. There are no inventory files in use, since the host this runs against is usually random.
On the command line in Linux, I can run my playbook multiple times with different variables passed, and they all run at the same time:
ansible-playbook cluster_check_build.yml -e '{"host": "host1"...}'
ansible-playbook cluster_check_build.yml -e '{"host": "host2"...}'
In Tower, however, if I create a job template using the same playbook, things run serially. I call that job template multiple times through the API, passing the data as JSON. Each time I call the API to launch the job, I supply new extra_vars, so each job runs against a different host. Yet I see the jobs run serially, not in parallel as they do from the command line.
I have 400+ hosts that need the same playbook run against them at random times. The playbook can take an hour or so to complete, and there can be times when it needs to run against 20 or 30 random hosts. Time is crucial, and serial job processing is a non-starter.
Is it possible to run the same job template against different hosts in parallel? If the answer is no, what are my options? Hopefully not creating 400+ job templates; that seems to defeat the purpose of a very generic playbook.
I feel like an absolute fool. In the bottom right of my job template is a tiny check box that says "ENABLE CONCURRENT JOBS" <-- this was the fix.
Yes, you can run templates/playbooks against multiple hosts in parallel in Tower/AWX.
These are the situations where your template will run serially:
"forks" set to 1 in your template
serial: 1 set within your playbook
Your Tower/AWX instance is set up with only 1 fork
Your instance is set with more than 1 fork, but other jobs are consuming them at the same time
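For reference, launching the same template repeatedly with different extra_vars through the API looks roughly like this sketch (the template ID, URL, and token are placeholders, and the template must have extra_vars set to prompt on launch). With "ENABLE CONCURRENT JOBS" checked and enough forks/capacity, these launches run in parallel:
# launch job template 42 once per host; IDs, URL, and token are placeholders
for h in host1 host2; do
  curl -s -X POST \
    -H "Authorization: Bearer $TOWER_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"extra_vars\": {\"host\": \"$h\"}}" \
    https://tower.example.com/api/v2/job_templates/42/launch/
done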
I came to this site for help with a task I couldn't find an answer for.
I need a script that reboots my computer if a string is not found in the output of a terminal program within, let's say, 30 seconds. That is, if the script doesn't find the name "joe" within 30 seconds, it will trigger the reboot command. Or, looked at from another perspective, the script will reboot the machine unless the name "joe" is found in the output of the terminal program within a given time period.
I have very little knowledge of bash scripting. Can anyone help?
Thanks in advance!
malandante
Use Expect
Use Expect rather than Bash to manage interactive scripts, or when you need to treat standard output as a stream.
#!/usr/bin/env expect
set timeout 30
spawn /path/to/script.sh
expect {
    joe {}
    # assumes current user has passwordless access to the reboot
    # command as configured in /etc/sudoers
    timeout { exec /usr/bin/sudo /sbin/reboot }
}
If you can reliably treat standard output from your script in a line-oriented fashion then you can use the Bash TMOUT variable to set a timeout on your read command, but there are a number of ways this can fail. Expect is really the right tool for the job.
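For completeness, here is a minimal sketch of that line-oriented Bash approach, using read -t for the timeout (read's explicit form of the TMOUT setting). The script path and reboot command are reused from the Expect example above, and it assumes the program's output is line-buffered:
#!/usr/bin/env bash
# Reboot unless the name "joe" appears in the program's output within 30 seconds.
deadline=$((SECONDS + 30))
found=0
while (( SECONDS < deadline )); do
    # read fails on timeout or end of output
    read -r -t $((deadline - SECONDS)) line || break
    if [[ $line == *joe* ]]; then
        found=1
        break
    fi
done < <(/path/to/script.sh)
(( found )) || /usr/bin/sudo /sbin/reboot
As the answer notes, buffering in the watched program can make this miss output, which is why Expect is the safer choice.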
There is an interactive shell console; I can get into it, run a specific set of commands inside the console, and exit from it.
Now I want to write a bash script that connects to the interactive shell console, runs my commands silently, and exits at the end without any interaction. In other words, I want everything automated in a non-interactive way. Any ideas how I can achieve this?
I am trying something like the following (say blabla shell is the interactive console here), but it always brings me into interactive mode :(
/usr/bin/blabla shell << EOF
do A,
do B,
do C
quit
EOF
A longer, more specific version of this question can be found here:
Configure flume in shell/bash script - avoid interactive flume shell console
Closing stdin should do the trick:
exec <&-
The expect command is your friend. It can emulate interactive communication with other commands, even in very sophisticated ways.
From man expect:
Expect is a program that "talks" to other interactive programs according to a script.
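A minimal sketch for the blabla console from the question (the ">" prompt is an assumption; adjust it to whatever the console actually prints):
#!/usr/bin/env expect
spawn /usr/bin/blabla shell
expect ">"          ;# wait for the assumed prompt
send "do A\r"
expect ">"
send "do B\r"
expect ">"
send "do C\r"
expect ">"
send "quit\r"
expect eof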
You can try putting the commands you would type at the interactive prompt into a file, and then running the command like this:
command < file
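For example, assuming the blabla console from the question reads its commands from standard input:
printf 'do A\ndo B\ndo C\nquit\n' > commands.txt
/usr/bin/blabla shell < commands.txt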
Maybe the Secure SHell, ssh, does what you need. It requires that the "remote" machine is configured as an SSH server. I use it regularly to run commands on other hosts, such as:
ssh user@host command
How do I allow my Matlab script to pass back a return code to the Task Scheduler? I currently have a task that runs "matlab -r myscript". The problem is that the Task Scheduler reports success immediately after starting, even though myscript takes several minutes to run, so I don't see how to pass back an error code.
How can I make the Task Scheduler wait until the script stops running, and then get Matlab to pass back a return code?
Use the matlab -wait command line option to have it block until the program is finished.
There appears to be an undocumented argument to quit() to set the exit status - e.g. quit(42) - which then shows up in %ERRORLEVEL%. Since it's undocumented, you might not want to rely on it. Alternatively, have your script write its status to a file and have a wrapper script parse it.
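Combining the two suggestions, a Task Scheduler command line could look like this sketch (it relies on the undocumented exit-status behavior mentioned above, so verify it on your Matlab version):
matlab -wait -r "try, myscript, catch, quit(1), end, quit(0)"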
Currently, I have a driver program that runs several thousand instances of a "payload" program and does some post-processing of the output. The driver currently calls the payload program directly, using a shell() function, from multiple threads. The shell() function executes a command in the current working directory, blocks until the command is finished running, and returns the data that was sent to stdout by the command. This works well on a single multicore machine. I want to modify the driver to submit qsub jobs to a large compute cluster instead, for more parallelism.
Is there a way to make the qsub command output its results to stdout instead of a file and block until the job is finished? Basically, I want it to act as much like "normal" execution of a command as possible, so that I can parallelize to the cluster with as little modification of my driver program as possible.
Edit: I thought all the grid engines were pretty much standardized. If they're not and it matters, I'm using Torque.
You don't mention what queuing system you're using, but SGE supports the '-sync y' option to qsub, which will cause it to block until the job completes or exits.
In TORQUE this is done using the -x and -I options: qsub -I specifies that the job should be interactive, and -x says to run only the command specified. For example:
qsub -I -x myscript.sh
will not return until myscript.sh finishes execution.
In PBS you can use qsub -Wblock=true <command>
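As a sketch, a blocking TORQUE wrapper for the driver's shell() call might look like this (whether stdout can be captured this way depends on how interactive jobs attach to the terminal, so treat it as a starting point):
# run one payload instance on the cluster, blocking until it finishes (TORQUE)
run_payload() {
    qsub -I -x "$@"
}
output=$(run_payload /path/to/payload_program)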