I am using the command below and I am getting the error [Errno 2] No such file or directory: b'cd':
ansible servers -a "cd /etc/ansible"
I know that we can make use of playbooks, but I just don't want to create one. Is there any possibility of executing the above command? Please let me know.
Thank you.
From your question I understand that you would like to execute ad-hoc commands on remote Managed Nodes, using an inventory like
[servers]
test1.example.com
test2.example.com
[test]
test1.example.com
test2.example.com
Ansible Engine needs to be installed on the Control Node only, from which it makes connections to the remote Managed Nodes to manage them. Therefore a command like
user@ansible.example.com:~$ ansible test --user ${REMOTE_ACCOUNT} --ask-pass -m shell --args 'cd /etc/ansible'
SSH password:
test1.example.com | FAILED | rc=1 >>
/bin/sh: line 0: cd: /etc/ansible: No such file or directory non-zero return code
test2.example.com | FAILED | rc=1 >>
/bin/sh: line 0: cd: /etc/ansible: No such file or directory non-zero return code
may fail, since Ansible usually isn't installed on the remote Managed Nodes at all (and doesn't need to be), so a directory like /etc/ansible typically won't exist there. You can test your connection and execution in general via
user@ansible.example.com:~$ ansible test --user ${REMOTE_ACCOUNT} --ask-pass -m shell --args 'cd /tmp'
SSH password:
test1.example.com | CHANGED | rc=0 >>
test2.example.com | CHANGED | rc=0 >>
Because of the error code you mentioned ([Errno 2], rc=2), you may also want to have a look at "What's the difference between Ansible raw, shell and command?" or "Difference between shell and command in Ansible".
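If the goal is simply to run a command from within a particular directory on the Managed Nodes, note that the command and shell modules accept a chdir parameter in their free-form arguments, so no separate cd step is needed. A minimal sketch (the /tmp directory and the pwd command are just examples):
user@ansible.example.com:~$ ansible test --user ${REMOTE_ACCOUNT} --ask-pass -m shell --args 'chdir=/tmp pwd'
Each host should then report the chdir target as the working directory in which the command ran, much like the /tmp test above.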
I'm trying to execute a shell script that modifies some files in my source code as part of a build pipeline. The build runs on a private Linux agent.
So I'm using the Shell Script task (I've also tried an inline Bash task); my YAML looks like this:
- task: ShellScript#2
  inputs:
    scriptPath: analytics/set-base-image.bash
    args: $(analyticsBaseImage)
    failOnStandardError: true
And set-base-image.bash:
#!/bin/bash
sudo mkdir testDir
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-shiny
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-plumber
But nothing happens. I get debug output that looks like this:
##[debug]/bin/bash arg: /datadrive/agent1/_work/1/s/analytics/set-base-image.bash
##[debug]args=analytics-base
##[debug]/bin/bash arg: analytics-base
##[debug]failOnStandardError=true
##[debug]exec tool: /bin/bash
##[debug]Arguments:
##[debug] /datadrive/agent1/_work/1/s/analytics/set-base-image.bash
##[debug] analytics-base
[command]/bin/bash /datadrive/agent1/_work/1/s/analytics/set-base-image.bash analytics-base
/datadrive/agent1/_work/1/s/analytics
##[debug]rc:0
##[debug]success:true
##[debug]task result: Succeeded
##[debug]Processed: ##vso[task.complete result=Succeeded;]Bash exited with return code: 0
testDir isn't created and the files aren't modified.
The script runs fine if I log onto the agent machine and run it there (after running chmod +x on the script file).
I've also tried an inline Bash task instead of the Shell Script task (the difference between the two isn't obvious anyway).
If I add commands to the script that don't require any privileges, like echo and pwd, these run fine and I see the results in the debug output. But mkdir and sed don't.
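One way to make any failure inside the script show up in the pipeline log, instead of ending with rc:0, is to add strict error handling and a couple of checks. This is only a diagnostic sketch of the script from the question; the set line, the ls check, and the sudo -n check are additions, not something the task requires:
#!/bin/bash
set -euo pipefail   # exit non-zero if any command fails or a variable is unset
pwd                 # confirm which directory the agent runs the script from
ls Dockerfile-shiny Dockerfile-plumber   # fails loudly if the files aren't in this directory
# Check whether sudo can run without prompting for a password on this agent
sudo -n true || { echo "sudo would prompt for a password here"; exit 1; }
sudo mkdir -p testDir
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-shiny
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-plumber
With that in place, a silent sudo or path problem turns into a non-zero return code that the task reports.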
I added a Jenkins project with a job, plus a file called test.sh with some simple code:
#!/bin/bash
docker run --rm -v $(pwd):/app image_name src/test.py
Jenkins is supposed to run test.sh, create a Docker container, run src/test.py, and remove the container:
builders:
  - shell: |
      ./test.sh
However, I am getting an error:
./test.sh
python: can't open file 'src/test.py': [Errno 2] No such file or directory
Build step 'Execute shell' marked build as failure
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 1781 killed;
However, src and src/test.py do exist (I checked).
So what causes the error?
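As a purely diagnostic sketch (assuming, as the error message suggests, that the image's entrypoint is python), it can help to print what $(pwd) expands to for the build step and to pin the container's working directory with -w, so that src/test.py is resolved against the mounted workspace rather than whatever WORKDIR the image defines. The image_name and paths are the ones from the question:
#!/bin/bash
echo "shell step runs in: $(pwd)"   # the directory $(pwd) expands to on the agent
ls -la src                          # confirm src/test.py exists relative to it
# -w /app makes the bind-mounted workspace the working directory inside the container
docker run --rm -v "$(pwd)":/app -w /app image_name src/test.py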
I have an Ant script, called from Jenkins, that after other deployment tasks starts a JBoss server. The deployment package already contains a startup script which wraps the JBoss run script:
[...]/bin/run.sh -b ip -c config >/dev/null 2>&1 &
The startup script runs fine when executed manually (i.e. ssh to the server and run sudo ./startup.sh).
Now I'm having trouble invoking this startup script from the sshexec task. The task can execute the startup script and JBoss does get spun up, but it terminates as soon as the task returns and moves on to the next task, similar to running run.sh directly and then closing the session.
My task is pretty standard
<sshexec host="host" username="username" password="password"
command="echo password | sudo -S sh ${JBOSS_HOME}/server/config/startup.sh" />
I'm confused. Shouldn't the startup script above have already covered starting JBoss separately from the session? Any idea how to solve this?
The remote system is Redhat 6.
Never mind, I found it. I still needed to combine nohup and running in the background with the startup script, plus the "dirty workaround" from here:
https://unix.stackexchange.com/questions/91065/nohup-sudo-does-not-prompt-for-passwd-and-does-nothing (was actually brilliant)
End result:
echo password | sudo -S env && sudo sh -c 'nohup startup.sh > /dev/null 2>&1 &'
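Folded back into the Ant task, that would look roughly like the sketch below; the host, credentials, and ${JBOSS_HOME} path are the placeholders from the question, and the ampersands are escaped because the command sits inside an XML attribute:
<sshexec host="host" username="username" password="password"
         command="echo password | sudo -S env &amp;&amp; sudo sh -c 'nohup ${JBOSS_HOME}/server/config/startup.sh > /dev/null 2>&amp;1 &amp;'" />
The first sudo -S primes the credential cache (the workaround from the linked question), so the second sudo can launch the nohup'd process without prompting.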
During deployment to AWS Elastic Beanstalk I want to do two things:
Take a file with cron entries and place this file in /etc/cron.d/
Change the file permissions on shell scripts contained in a single directory so they can be executed by the cron
In my .ebextensions folder I have the following:
container_commands:
  00fix_script_permissions:
    command: "chmod u+x /var/app/current/scripts/*"
  01setup_cron:
    command: "cat .ebextensions/propel_mis_crons.txt > /etc/cron.d/propel_mis_crons && chmod 644 /etc/cron.d/propel_mis_crons"
    leader_only: true
And the propel_mis_crons.txt in the .ebextensions folder has:
# m h dom mon dow command
MAILTO="dev#23sparks.com"
* * * * * root /var/app/current/scripts/current_time.sh
I've checked the deploy logs and I can see the following:
2013-08-09 14:27:13,633 [DEBUG] Running command 00fix_permissions_dirs
2013-08-09 14:27:13,633 [DEBUG] Generating defaults for command 00fix_permissions_dirs
<<<
2013-08-09 14:27:13,736 [DEBUG] No test for command 00fix_permissions_dirs
2013-08-09 14:27:13,752 [INFO] Command 00fix_permissions_dirs succeeded
2013-08-09 14:27:13,753 [DEBUG] Command 00fix_permissions_dirs output:
2013-08-09 14:27:13,753 [DEBUG] Running command 01setup_cron
2013-08-09 14:27:13,753 [DEBUG] Generating defaults for command 01setup_cron
<<<
2013-08-09 14:27:13,829 [DEBUG] Running test for command 01setup_cron
2013-08-09 14:27:13,846 [DEBUG] Test command output:
2013-08-09 14:27:13,847 [DEBUG] Test for command 01setup_cron passed
2013-08-09 14:27:13,871 [INFO] Command 01setup_cron succeeded
2013-08-09 14:27:13,872 [DEBUG] Command 01setup_cron output:
However, on deployment the permissions on the files in the scripts directory are not set correctly and the cron does not run. I'm not sure whether the cron isn't running because of the permissions issue or whether something else is preventing it. This is running on a PHP 5.4 64-bit Amazon Linux instance.
I'd appreciate some assistance with this. It's quite possible that over time new shell scripts triggered by a cron will be added.
container_commands:
  00_fix_script_permissions:
    command: "chmod u+x /opt/python/ondeck/app/scripts/*"
I'm a Linux and AWS noob; however, I found that modifying your command as above did succeed for my use case.
/opt/python/current/app/scripts/createadmin now has execute permission for the user
@Ed seems to be correct where he suggests chmod'ing the ondeck path as opposed to the current one.
Additionally, this is how I set up my cron jobs through the Elastic Beanstalk .config file. It certainly might not be the best way, but it works for my application.
`"files":{
"/home/ec2-user/cronjobs.txt":{
"mode":"000777",
"owner":"ec2-user",
"group":"ec2-user",
"source":"https://s3.amazonaws.com/xxxxxxxxxx/cronjobs.txt"
}
}
"container_commands":{
"01-setupcron":{
"command": "crontab /home/ec2-user/cronjobs.txt -u ec2-user",
"leader_only": true
},`
First, I pull in a cronjobs text file and save it in the ec2-user folder. Next, in the container_commands I apply that file to the crontab.
Again, I'm no expert, but that's the best I could come up with and it has worked pretty well for us.
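One detail worth keeping in mind with this approach: a file installed via crontab -u must not contain the user column that /etc/cron.d/ entries use. A hypothetical cronjobs.txt for the script from the question might therefore look like:
# m h dom mon dow command   (no user column when the file is installed with crontab -u)
MAILTO="dev@23sparks.com"
* * * * * /var/app/current/scripts/current_time.sh
By contrast, the /etc/cron.d/propel_mis_crons file in the original question correctly carries the root user field.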