EC2 (Beanstalk) crontab not updating via .ebextensions

I created the following file at .ebextensions/site_cron.config:
container_commands:
  01_remove_old_cron_jobs:
    command: "crontab -r || exit 0"
  02_cronjobs:
    command: "cat .ebextensions/crontab | crontab"
    leader_only: true
My .ebextensions/crontab is:
0 23 * * * php /var/www/html/index.php cron publish_new
I took a snapshot of my logs from Beanstalk and see this:
2014-05-24 14:41:46,743 [DEBUG] Running command 01_remove_old_cron_jobs
2014-05-24 14:41:46,744 [DEBUG] Generating defaults for command 01_remove_old_cron_jobs
<<<
2014-05-24 14:41:46,857 [DEBUG] No test for command 01_remove_old_cron_jobs
2014-05-24 14:41:46,877 [INFO] Command 01_remove_old_cron_jobs succeeded
2014-05-24 14:41:46,877 [DEBUG] Command 01_remove_old_cron_jobs output:
2014-05-24 14:41:46,878 [DEBUG] Running command 02_cronjobs
2014-05-24 14:41:46,878 [DEBUG] Generating defaults for command 02_cronjobs
<<<
2014-05-24 14:41:46,989 [DEBUG] Running test for command 02_cronjobs
2014-05-24 14:41:47,005 [DEBUG] Test command output:
2014-05-24 14:41:47,006 [DEBUG] Test for command 02_cronjobs passed
2014-05-24 14:41:47,028 [INFO] Command 02_cronjobs succeeded
2014-05-24 14:41:47,029 [DEBUG] Command 02_cronjobs output:
2014-05-24 14:41:47,029 [DEBUG] No services specified
It seems like everything went through OK and the instance is in a green state. However, I do not see anything in crontab -e or in /var/www/html/.ebextensions -- what gives? How can I confirm that my cron jobs are in place and ready to go?

I've been able to find my cron jobs.
I had to check with: sudo crontab -l
Previously I was using crontab -l
Not exactly sure why it shows for sudo but not for the logged-in user; I figured I was already root.
Anyway, since it is showing in that crontab, I am going to assume the jobs will run as expected.
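For anyone checking the same thing: container_commands run as root, so the piped crontab ends up in root's table. A rough way to see which user's crontab actually holds the entries (ec2-user here is just the default Amazon Linux login account; adjust if yours differs):
sudo crontab -l                  # root's crontab -- where container_commands installed the entries
crontab -l                       # the crontab of the user you are logged in as
sudo crontab -l -u ec2-user      # a specific user's crontab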

For some reason, if you install the jobs in the root crontab they don't run.
They have to be in the ec2-user crontab or they don't execute.
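If that is the case for your setup, a minimal sketch of the install command, assuming the jobs should run under ec2-user, would pipe the file into that user's crontab instead and then verify it (both lines run as root inside container_commands):
cat .ebextensions/crontab | crontab -u ec2-user -
crontab -l -u ec2-user           # confirm the entries landed in ec2-user's table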

Related

cd command is not working in Ansible when used with the -a flag

I am using the command below and I am getting the error [Errno 2] No such file or directory: b'cd':
ansible servers -a "cd /etc/ansible"
I know that we can make use of playbooks, but I just don't want to create one. Is there any way to execute the above command? Please let me know.
Thank you.
According to your question, I understand that you would like to execute ad-hoc commands on remote Managed Nodes, using an inventory like
[servers]
test1.example.com
test2.example.com
[test]
test1.example.com
test2.example.com
The Ansible engine only needs to be installed on the Control Node, which makes connections to the remote Managed Nodes in order to manage them. Therefore a command like
user@ansible.example.com:~$ ansible test --user ${REMOTE_ACCOUNT} --ask-pass -m shell --args 'cd /etc/ansible'
SSH password:
test1.example.com | FAILED | rc=1 >>
/bin/sh: line 0: cd: /etc/ansible: No such file or directory non-zero return code
test2.example.com | FAILED | rc=1 >>
/bin/sh: line 0: cd: /etc/ansible: No such file or directory non-zero return code
may fail, since Ansible usually isn't, and doesn't need to be, installed on the remote Managed Nodes at all -- so a directory like /etc/ansible won't exist there. You may test your connection and execution in general via
user@ansible.example.com:~$ ansible test --user ${REMOTE_ACCOUNT} --ask-pass -m shell --args 'cd /tmp'
SSH password:
test1.example.com | CHANGED | rc=0 >>
test2.example.com | CHANGED | rc=0 >>
Because of the [Errno 2] you mentioned, you may also want to have a look at what the difference is between Ansible's raw, shell and command modules, or the difference between shell and command in Ansible.
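Also note that each ad-hoc call spawns a fresh shell on the managed node, so a bare cd never persists anyway. If the goal is to run something inside a particular directory, a rough alternative (assuming /etc/ansible actually exists on the managed nodes, which the failed output above suggests it does not) is to pass chdir to the command or shell module, or to chain the commands:
ansible servers -m command -a 'chdir=/etc/ansible ls -la'
ansible servers -m shell -a 'cd /etc/ansible && ls -la'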

Azure Pipelines Shell Script task not executing

I'm trying to execute a shell script that modifies some files in my source code as part of a build pipeline. The build runs on a private Linux agent.
So I'm using the Shell Script task (I've also tried an inline Bash task); my YAML looks like this:
- task: ShellScript#2
  inputs:
    scriptPath: analytics/set-base-image.bash
    args: $(analyticsBaseImage)
    failOnStandardError: true
And set-base-image.bash:
#!/bin/bash
sudo mkdir testDir
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-shiny
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-plumber
But nothing happens. I get debug output that looks like this:
##[debug]/bin/bash arg: /datadrive/agent1/_work/1/s/analytics/set-base-image.bash
##[debug]args=analytics-base
##[debug]/bin/bash arg: analytics-base
##[debug]failOnStandardError=true
##[debug]exec tool: /bin/bash
##[debug]Arguments:
##[debug] /datadrive/agent1/_work/1/s/analytics/set-base-image.bash
##[debug] analytics-base
[command]/bin/bash /datadrive/agent1/_work/1/s/analytics/set-base-image.bash analytics-base
/datadrive/agent1/_work/1/s/analytics
##[debug]rc:0
##[debug]success:true
##[debug]task result: Succeeded
##[debug]Processed: ##vso[task.complete result=Succeeded;]Bash exited with return code: 0
testDir isn't created and the files aren't modified.
The script runs fine if I log onto the agent machine and run it there (after running chmod +x on the script file).
I've also tried an inline Bash task instead of the Shell Script task (the difference between the two isn't obvious anyway).
If I add commands to the script that don't require any privileges, like echo and pwd, they run fine and I see the results in the debug output. But the mkdir and sed commands don't do anything.
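One way to narrow this down is to make the script fail loudly instead of exiting 0, and to print the account and directory the agent actually gives it. A rough debugging variant (paths and placeholders are the ones from the question; whether the agent account has passwordless sudo is exactly the thing to verify) might be:
#!/bin/bash
set -eu                                   # any failing command now fails the task instead of rc:0
whoami                                    # which account the agent runs the script as
pwd                                       # which working directory the agent runs it in
sudo -n mkdir testDir                     # -n: fail immediately rather than hang on a password prompt
sudo -n sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-shiny Dockerfile-plumber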

Jenkins unable to run Python file in Docker - file not found

I added a Jenkins project with a job, and a file called test.sh with some simple code:
#!/bin/bash
docker run --rm -v $(pwd):/app image_name src/test.py
Jenkins is supposed to run test.sh, create a Docker container, run src/test.py, and remove the container:
builders:
  - shell: |
      ./test.sh
However, I am getting an error:
./test.sh
python: can't open file 'src/test.py': [Errno 2] No such file or directory
Build step 'Execute shell' marked build as failure
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 1781 killed;
However, src and src/test.py do exist (I checked).
So what causes the error?
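A common cause with this setup is that the image's default working directory is not /app, so the relative path src/test.py is resolved somewhere else inside the container. A hedged variant of test.sh that pins the working directory to the mount (image_name and the src/test.py path are taken from the question) would be:
#!/bin/bash
# mount the Jenkins workspace at /app and make it the working directory,
# so src/test.py resolves inside the mounted volume
docker run --rm -v "$(pwd)":/app -w /app image_name src/test.py
It is also worth echoing $(pwd) in the Jenkins shell step first, to confirm that the workspace the job runs in really contains src/test.py.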

Ant sshexec task unable to execute remote script file separate from session

I have an Ant script, to be called from Jenkins, that - after other deployment tasks - starts a JBoss server. The deployment package already contains a startup script which wraps the JBoss run script:
[...]/bin/run.sh -b ip -c config >/dev/null 2>&1 &
The startup script runs fine when executed manually (i.e. ssh to the server and sudo ./startup.sh).
Now I'm having trouble invoking this startup script from the sshexec task. The task does execute the startup script and JBoss does get spun up, but it terminates as soon as the task returns and moves on to the next task - similar to running run.sh directly and then closing the session.
My task is pretty standard
<sshexec host="host" username="username" password="password"
command="echo password | sudo -S sh ${JBOSS_HOME}/server/config/startup.sh" />
I'm confused. Shouldn't the startup script above have covered starting JBoss separately from the session already? Any idea how to solve this?
The remote system is Red Hat 6.
Never mind, I found it. I still needed to combine nohup and backgrounding with the startup script, plus the "dirty workaround" from https://unix.stackexchange.com/questions/91065/nohup-sudo-does-not-prompt-for-passwd-and-does-nothing (which was actually brilliant).
End result:
echo password | sudo -S env && sudo sh -c 'nohup startup.sh > /dev/null 2>&1 &'
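For reference, the shape of that command as a plain shell sketch (leaving aside that piping the password around is its own problem): prime sudo's credential cache non-interactively once, then let sudo start the script under nohup in a subshell so it survives the sshexec session closing. Using true instead of env is only a cosmetic simplification, and $JBOSS_HOME is assumed to be set in the remote environment (in the Ant task it was an Ant property expanded before sending):
# prime sudo's timestamp, then start JBoss detached from the SSH session
echo 'password' | sudo -S -p '' true && sudo sh -c "nohup $JBOSS_HOME/server/config/startup.sh > /dev/null 2>&1 &"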

Elastic Beanstalk cron and deployment permissions

During deployment to AWS Elastic Beanstalk I want to do two things:
1. Take a file with cron entries and place it in /etc/cron.d/
2. Change the file permissions on the shell scripts contained in a single directory so they can be executed by cron
In my .ebextensions folder I have the following:
container_commands:
  00fix_script_permissions:
    command: "chmod u+x /var/app/current/scripts/*"
  01setup_cron:
    command: "cat .ebextensions/propel_mis_crons.txt > /etc/cron.d/propel_mis_crons && chmod 644 /etc/cron.d/propel_mis_crons"
    leader_only: true
And the propel_mis_crons.txt in the .ebextensions folder has:
# m h dom mon dow command
MAILTO="dev@23sparks.com"
* * * * * root /var/app/current/scripts/current_time.sh
I've checked the deploy logs and I can see the following:
2013-08-09 14:27:13,633 [DEBUG] Running command 00fix_permissions_dirs
2013-08-09 14:27:13,633 [DEBUG] Generating defaults for command 00fix_permissions_dirs
<<<
2013-08-09 14:27:13,736 [DEBUG] No test for command 00fix_permissions_dirs
2013-08-09 14:27:13,752 [INFO] Command 00fix_permissions_dirs succeeded
2013-08-09 14:27:13,753 [DEBUG] Command 00fix_permissions_dirs output:
2013-08-09 14:27:13,753 [DEBUG] Running command 01setup_cron
2013-08-09 14:27:13,753 [DEBUG] Generating defaults for command 01setup_cron
<<<
2013-08-09 14:27:13,829 [DEBUG] Running test for command 01setup_cron
2013-08-09 14:27:13,846 [DEBUG] Test command output:
2013-08-09 14:27:13,847 [DEBUG] Test for command 01setup_cron passed
2013-08-09 14:27:13,871 [INFO] Command 01setup_cron succeeded
2013-08-09 14:27:13,872 [DEBUG] Command 01setup_cron output:
However, after deployment the permissions on the files in the scripts directory are not set correctly and the cron does not run. I'm not sure whether the cron isn't running because of the permissions issue or whether something else is preventing it. This is running on a PHP 5.4 64-bit Amazon Linux instance.
I would appreciate some assistance on this. It's quite possible that over time new shell scripts triggered by cron will be added.
container_commands:
  00_fix_script_permissions:
    command: "chmod u+x /opt/python/ondeck/app/scripts/*"
I'm a Linux and AWS noob; however, I found that modifying your command as above did succeed for my use case.
/opt/python/current/app/scripts/createadmin now has execute permission for the user.
@Ed seems to be correct where he suggests chmod'ing the ondeck path as opposed to the current one.
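The reason, as far as I understand it, is that container_commands run against the staged copy of the application before it is swapped into place, so chmod'ing the current path only touches the previous deployment. A rough post-deploy check (the ondeck/current paths are the Python-platform defaults used above; on the PHP platform the equivalents are /var/app/ondeck and /var/app/current):
ls -l /opt/python/ondeck/app/scripts/ 2>/dev/null   # staging copy, only present while a deployment is in progress
ls -l /opt/python/current/app/scripts/              # live copy after the swap -- the permissions should carry over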
Additionally, this is how I set up my cron jobs through the Elastic Beanstalk .config file. It certainly might not be the best way, but it works for my application.
`"files":{
"/home/ec2-user/cronjobs.txt":{
"mode":"000777",
"owner":"ec2-user",
"group":"ec2-user",
"source":"https://s3.amazonaws.com/xxxxxxxxxx/cronjobs.txt"
}
}
"container_commands":{
"01-setupcron":{
"command": "crontab /home/ec2-user/cronjobs.txt -u ec2-user",
"leader_only": true
},`
First, I pull in a cronjobs text file and save it in the ec2-user folder. Next, in the container_commands I apply that file to the crontab.
Again, I'm no expert, but that's the best I could come up with and it has worked pretty well for us.
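A rough way to confirm after a deploy that both pieces took effect (the paths and the ec2-user account are the ones used above; /var/log/cron is the cron log location on Amazon Linux):
sudo crontab -l -u ec2-user            # entries installed by 'crontab /home/ec2-user/cronjobs.txt -u ec2-user'
ls -l /etc/cron.d/propel_mis_crons     # the drop-in file from the question: should be root-owned, mode 644
sudo tail -n 50 /var/log/cron          # confirm cron is actually firing the jobs on the instance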
