I am having a problem related to a Zabbix action.
I have a trigger that fires when my /mnt volume is full.
Then I have an action for that trigger that runs an Ansible playbook that will remove an image located at /mnt.
I tested the Ansible playbook on my local machine and it does the job.
Before I had a shell script to remove the image and that worked too.
But now, when the action executes ansible-playbook /mnt/myplatform/playbook.yml, the Zabbix log shows that the command was executed, yet nothing happens.
Before, sudo rm /mnt/test.img worked.
Now ansible-playbook /mnt/myplatform/playbook.yml doesn't.
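A minimal sketch of the kind of playbook described here, using the /mnt/test.img path from the old shell command (illustrative only, not the actual /mnt/myplatform/playbook.yml):

---
# Hypothetical sketch: the host target and connection are assumptions
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Remove the image that fills up /mnt
      file:
        path: /mnt/test.img
        state: absent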
Related
I am encountering a problem where I am running the same tasks on two remote nodes, and the directories those commands are executed in differ.
If I run pwd through Ansible on each remote host before this command, they return different paths, for example /usr and /usr/src. If I log into the remote hosts manually, I land in /usr/src on both (as specified in their configuration files).
Can anyone explain why this is happening? What directory does Ansible use if you run a command without specifying a chdir?
I would expect this difference to happen because, when logging in manually, one of the two hosts has a .bashrc that cds you into the right folder, whereas Ansible does not source the .bashrc file.
By default, ssh (and therefore Ansible) logs you into the $HOME folder of the user you configure Ansible to connect with, which you can also look up in /etc/passwd.
Another reason this could happen is that you use one user to log into the node but then become another one, as in the following example:
inventory.yml

all:
  hosts:
    some.example.com:
      ansible_user: some_user

playbook.yml

---
- hosts: all
  tasks:
    - command: pwd # still you will be in /home/some_user
      become: yes
      become_user: some_other_user
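If the task has to run from a specific directory regardless of which $HOME the login or become user lands in, you can pin it explicitly with chdir; a small sketch reusing the /usr/src directory from the question:

- command: pwd
  args:
    chdir: /usr/src   # run from here no matter whose $HOME we land in
  become: yes
  become_user: some_other_user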
I'm playing with Ansible (still learning), but I've run into a problem I can't find a solution to.
I'm trying to install and launch Tomcat on a remote server using Ansible.
The installation is working, but the last step, which is starting the Tomcat server, is failing.
If I manually launch the startup.sh script (as su -), using the following command: bash /opt/tomcat/startup.sh, I can see the Tomcat homepage.
Using the Ansible playbook I wrote, even though Ansible doesn't show any errors, I can't see the Tomcat homepage.
Here is the task I'm running:
- name: Launch Tomcat
  command: bash /opt/tomcat/startup.sh
  become: true
I tried to add become_user: root and become_method: sudo with no success.
I think it may be related to how become: true is handled by Ansible, but I'm not sure.
Have you also tried using the shell module instead of the command module?
With the command module, the command is executed without being processed through a shell. As a consequence, some variables like $HOME are not available, and stream operations like <, >, | and & will not work.
The shell module runs a command through a shell, /bin/sh by default; this can be changed with the executable option. Piping and redirection are therefore available.
(Source: https://blog.confirm.ch/ansible-modules-shell-vs-command/)
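To make the difference concrete, here is a minimal contrast using the startup script from the question (the log file path is made up for illustration):

# command passes ">>" and the path to the script as literal arguments, so no redirection happens
- command: bash /opt/tomcat/startup.sh >> /tmp/tomcat-start.log

# shell runs the line through /bin/sh, so the redirection is honoured
- shell: bash /opt/tomcat/startup.sh >> /tmp/tomcat-start.log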
There might be a problem with the environment. "sudo su" is different from "su -" where
-, -l, --login Provide an environment similar to what the user would expect had the user logged in directly.
Try shell (because it allows pipes, redirection, logical operations, ...) without become: true
shell: su - && bash /opt/tomcat/startup.sh
Make sure remote_user is the same user for whom the su - command works.
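Another option worth trying (this is an assumption on my part, not something the answer above covers, and it needs a recent Ansible plus the su password via ansible_become_pass) is to keep become but have it use su with a login flag, so the task gets an environment closer to su -:

- name: Launch Tomcat with a login-shell environment
  command: bash /opt/tomcat/startup.sh
  become: true
  become_method: su
  become_flags: '-'   # pass "-" to su so it behaves like a login shell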
I had the same problem while working with startup.sh in an Ansible script. It turned out the Tomcat server process started, but then immediately shut down as well.
So the solution to the problem is to start the Tomcat server with nohup through Ansible.
Here is a sample playbook:
cat start.yml
---
- name: Playbook to stop server
  #hosts: localhost
  hosts: webserver
  tasks:
    - name: Start the server tomcat from UI
      shell:
        nohup /home/tomcat/bin/catalina.sh start >> /home/tomcat/somelog
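To confirm that Tomcat actually stays up after the play, rather than starting and immediately shutting down again, a follow-up task in the same tasks list can help; the port and timeout here are assumptions:

    - name: Wait for Tomcat to answer on its HTTP port
      wait_for:
        port: 8080      # assumed default Tomcat connector port
        delay: 5
        timeout: 60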
I am writing an Ansible playbook to automate a series of sudo commands on various hosts. When I execute these commands individually in PuTTY, I have no permission problems, as I have been granted proper access. However, when I attempt to create a playbook to do the same thing, I am told
user is not allowed to execute ... on host_name
For example, if I do $ sudo ls /root/, I have no problem and, once I enter my password, can see the contents of /root/.
In the case of my Ansible playbook ...
---
- hosts: servers
  tasks:
    - name: ls /root/
      shell: ls /root/
      become: true
      become_method: sudo
...I then get the error mentioned above.
Any ideas why this would be the case? It seems to be telling me I don't have permission to run a command that I otherwise could run in an individual puTTY terminal.
[…] automate a series of sudo commands on various hosts. When I execute these commands individually […]
Any ideas why this would be the case?
Sounds like you configured specific commands in the sudoers file (unfortunately you did not provide enough details, fortunately you asked for "ideas" not the real cause).
Ansible's shell module does not run the command you specify prefixed with sudo; it runs the whole shell session with sudo, so the command doesn't match what you configured in sudoers.
Either allow all commands to be run with elevated privileges for the Ansible user, or use the raw module instead of shell.
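As a rough illustration of those two options (the user name is hypothetical, and sudoers changes should be made with visudo):

# /etc/sudoers.d/ansible -- allow the Ansible connection user to run any command via sudo
deploy ALL=(ALL) NOPASSWD: ALL

Or, if the restricted sudoers entry has to stay, the raw module sends the literal command over SSH without Ansible's shell wrapping, so it matches the entry as written (this assumes the entry is NOPASSWD, since there is no password prompt to answer here):

- name: ls /root/ exactly as allowed in sudoers
  raw: sudo ls /root/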
I have a tomcat.sh whose purpose is to restart (stop and start) Tomcat after it has been deployed onto the remote host.
I noticed that the shell and command modules are not executing the .sh file. However, I am able to execute the .sh file manually on the remote host as the remote user.
The playbook tasks are listed below:
Shell

- name: ensures tomcat is restarted
  shell:
    "nohup {{tomcat_dir}}/apache-tomcat-{{tomcat_version}}/tomcat.sh &"

Command

- name: ensures tomcat is restarted test-test
  command: ./tomcat.sh
  args:
    chdir: "{{tomcat_dir}}/apache-tomcat-{{tomcat_version}}"
I was having the same problem. I have a simple script that is supposed to start a Java program. The shell and command modules work very erratically: sometimes the Java program is started successfully, and sometimes nothing happens, even though the Ansible rc status shows 0 (which is the successful exit code). I even put an echo "Hello" >> output.log as the first line in my script, to check whether the script is actually picked up for running, but even that does not get printed. Still, no errors whatsoever are printed and the Ansible module exit status (rc) is 0.
Also, be sure to look at stderr too. Sometimes, even though rc is 0, there might be some info in stderr.
After lots of hair tearing, I managed to fix my issue. I was running the Java program with sudo. I removed the sudo from my script and put it in the playbook as the become directive (http://docs.ansible.com/ansible/become.html). This directive is available from Ansible 2.0 onwards only, so I had to upgrade my Ansible from 1.5 to 2.0. My playbook finally looked like this:
- name: Execute run.sh
  become: true
  script: /home/vagrant/deploy/target/scripts/run.sh
The script looks like this:
nohup java -Djava.net.preferIPv4Stack=true -Xms1048m -Xmx1048m -cp /x/y/z/1.jar com.github.binitabharati.x.Main >> output.log 2>&1 &
Also, notice that I have not used the command or shell module; instead I used the script module.
Does the value of {{tomcat_dir}} start with a "/"? If not, it will try to execute the command using the specified path relative to the home directory of whichever user Ansible is using to SSH in to the remote host.
Incidentally, if you installed via a package manager, this might be easier:
- name: restart tomcat
  become: yes
  become_method: sudo
  service: name=tomcat state=restarted
More detail on the "service" module here
Of course, the service name may be different, like tomcat6 or tomcat7 or whatever, but you get the gist, I hope.
Lastly, if you installed tomcat via ansible using a role from galaxy or github, your role may have handlers that you could copy to get this done.
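For reference, a handler-based version of that pattern might look roughly like this (the file names and service name are illustrative):

tasks:
  - name: Deploy the webapp
    copy:
      src: myapp.war
      dest: /var/lib/tomcat/webapps/myapp.war
    notify: restart tomcat

handlers:
  - name: restart tomcat
    service:
      name: tomcat
      state: restarted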
I would like to run my Ansible playbook against a remote test machine, but as a way of testing I'd like to verify between each step that what I expected to be done was done.
I'd like to add, more or less, a "pause" task after every task, but without actually putting it into my YAML script. Does Ansible have any sort of 'debug' mode that would allow for this?
I'm using Ansible 1.5, but am open to answers that use features in newer versions.
Yes, Ansible has a "step" mode, which makes it pause before every task and wait for user confirmation before executing it.
Simply call your playbook with the step flag:
ansible-playbook ... --step
start-at-task
To save time, you can use --start-at-task to execute only the last commands, which are probably the ones causing trouble. But for that you have to name your task.
This shell task has no name:
- shell: vagrant provision; vagrant up;
  args:
    chdir: /vm/vagrant
This one does:
- name: start vagrant
  shell: vagrant provision; vagrant up;
  args:
    chdir: /vm/vagrant
Then run:
ansible-playbook playbook.yml --start-at-task="start vagrant"
tags
Another helpful tip is to use tags. For example, say you want to try only one command:
- shell: vagrant provision; vagrant up;
  args:
    chdir: /linux/{{item.name}}
  tags: [shell, debug]
Now you can debug this one by doing:
ansible-playbook playbook.yml --tags="debug"
And it will run only the tasks that received the debug tag.
Verbose
And if you want more information, you can ask Ansible to be more verbose using -v, -vv, -vvv or -vvvvv:
ansible-playbook -vvvv playbook.yml --tags="debug"
This will tell you everything it can about the specified tasks.
I do not think Ansible provides a feature like that. One way to do this is to put a pause between plays and make it conditional. When you execute the playbook, define a variable that decides whether to pause or not.
- pause:
  when: PAUSE is defined
When you execute the playbook, don't define PAUSE if you don't want to pause. But if you want to pause between plays, then define it.
ansible-playbook -v .... --extra-vars "PAUSE=yes" ... myplay.yml
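In context, the conditional pause is simply placed between the tasks you want to inspect; a rough sketch (the task names and commands are made up):

- hosts: servers
  tasks:
    - name: First step
      command: /bin/true

    - pause:
        prompt: "Check the result of the first step, then press Enter"
      when: PAUSE is defined

    - name: Second step
      command: /bin/true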