During deployment to AWS Elastic Beanstalk I want to do two things:
Take a file with cron entries and place this file in /etc/cron.d/
Change the file permissions on the shell scripts contained in a single directory so they can be executed by cron
In my .ebextensions folder I have the following:
container_commands:
  00fix_script_permissions:
    command: "chmod u+x /var/app/current/scripts/*"
  01setup_cron:
    command: "cat .ebextensions/propel_mis_crons.txt > /etc/cron.d/propel_mis_crons && chmod 644 /etc/cron.d/propel_mis_crons"
    leader_only: true
And the propel_mis_crons.txt in the .ebextensions folder has:
# m h dom mon dow command
MAILTO="dev@23sparks.com"
* * * * * root /var/app/current/scripts/current_time.sh
I've checked the deploy logs and I can see the following:
2013-08-09 14:27:13,633 [DEBUG] Running command 00fix_permissions_dirs
2013-08-09 14:27:13,633 [DEBUG] Generating defaults for command 00fix_permissions_dirs
<<<
2013-08-09 14:27:13,736 [DEBUG] No test for command 00fix_permissions_dirs
2013-08-09 14:27:13,752 [INFO] Command 00fix_permissions_dirs succeeded
2013-08-09 14:27:13,753 [DEBUG] Command 00fix_permissions_dirs output:
2013-08-09 14:27:13,753 [DEBUG] Running command 01setup_cron
2013-08-09 14:27:13,753 [DEBUG] Generating defaults for command 01setup_cron
<<<
2013-08-09 14:27:13,829 [DEBUG] Running test for command 01setup_cron
2013-08-09 14:27:13,846 [DEBUG] Test command output:
2013-08-09 14:27:13,847 [DEBUG] Test for command 01setup_cron passed
2013-08-09 14:27:13,871 [INFO] Command 01setup_cron succeeded
2013-08-09 14:27:13,872 [DEBUG] Command 01setup_cron output:
However, on deployment the permissions on the files in the scripts directory are not set correctly and the cron does not run. I'm not sure whether the cron isn't running because of the permissions issue or whether something else is preventing it. This is running on a PHP 5.4 64-bit Amazon Linux instance.
Would appreciate some assistance on this. It's quite possible that new shell scripts to be triggered by cron will be added over time.
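A few commands run on the instance itself can confirm what actually landed after a deploy; this is only a sketch, using the paths from the question.
# Sketch: run these on the EC2 instance after a deploy (paths from the question above).
ls -l /var/app/current/scripts/            # were the execute bits actually applied?
cat /etc/cron.d/propel_mis_crons           # was the cron file written with the expected content?
ls -l /etc/cron.d/propel_mis_crons         # cron.d entries should be mode 644 and owned by root
sudo tail -n 50 /var/log/cron              # is cron picking up and running the job?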
container_commands:
  00_fix_script_permissions:
    command: "chmod u+x /opt/python/ondeck/app/scripts/*"
I'm a Linux and AWS noob, but I found that modifying your command as above did succeed for my use case.
/opt/python/current/app/scripts/createadmin now has execute permission for the user
@Ed seems to be correct where he suggests chmod'ing the ondeck path as opposed to the current one.
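Since the original question runs on the PHP platform rather than Python, the equivalent staging path is presumably /var/app/ondeck rather than /opt/python/ondeck; a sketch of the same idea under that assumption:
# Sketch, assuming the PHP platform stages the new version under /var/app/ondeck
# before promoting it to /var/app/current. container_commands run against the
# staged copy, so chmod the staged scripts and the execute bits carry over to current.
chmod u+x /var/app/ondeck/scripts/*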
Additionally, this is how I set up my cron jobs through the Elastic Beanstalk .config file. It certainly might not be the best way, but it works for my application.
"files": {
    "/home/ec2-user/cronjobs.txt": {
        "mode": "000777",
        "owner": "ec2-user",
        "group": "ec2-user",
        "source": "https://s3.amazonaws.com/xxxxxxxxxx/cronjobs.txt"
    }
},
"container_commands": {
    "01-setupcron": {
        "command": "crontab /home/ec2-user/cronjobs.txt -u ec2-user",
        "leader_only": true
    }
}
First, I pull in a cronjobs text file and save it in the ec2-user folder. Next, in the container_commands I apply that file to the crontab.
Again, I'm no expert, but that's the best I could come up with and it has worked pretty well for us.
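Roughly, this is what those two pieces of config end up doing on the leader instance, shown as plain shell (a sketch only; the S3 URL and file path come from the snippet above):
# Rough shell equivalent of the config above (sketch only).
# 1. The files key fetches the crontab source onto the instance:
curl -o /home/ec2-user/cronjobs.txt https://s3.amazonaws.com/xxxxxxxxxx/cronjobs.txt
chmod 777 /home/ec2-user/cronjobs.txt
chown ec2-user:ec2-user /home/ec2-user/cronjobs.txt
# 2. The container_command installs that file as ec2-user's crontab:
crontab /home/ec2-user/cronjobs.txt -u ec2-user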
Related
I'm trying to execute a shell script that modifies some files in my source code as part of a build pipeline. The build runs on a private linux agent.
So I'm using the shell script task (I've also tried an inline Bash task); my YAML looks like this:
- task: ShellScript@2
  inputs:
    scriptPath: analytics/set-base-image.bash
    args: $(analyticsBaseImage)
    failOnStandardError: true
And set-base-image.bash:
#!/bin/bash
sudo mkdir testDir
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-shiny
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-plumber
But nothing happens. I get debug output that looks like this:
##[debug]/bin/bash arg: /datadrive/agent1/_work/1/s/analytics/set-base-image.bash
##[debug]args=analytics-base
##[debug]/bin/bash arg: analytics-base
##[debug]failOnStandardError=true
##[debug]exec tool: /bin/bash
##[debug]Arguments:
##[debug] /datadrive/agent1/_work/1/s/analytics/set-base-image.bash
##[debug] analytics-base
[command]/bin/bash /datadrive/agent1/_work/1/s/analytics/set-base-image.bash analytics-base
/datadrive/agent1/_work/1/s/analytics
##[debug]rc:0
##[debug]success:true
##[debug]task result: Succeeded
##[debug]Processed: ##vso[task.complete result=Succeeded;]Bash exited with return code: 0
testDir isn't created and the files aren't modified.
The script runs fine if I log onto the agent machine and run it there (after running chmod +x on the script file).
I've also tried an inline Bash task instead of a shell script task (it's not obvious what the difference is anyway).
If I add commands to the script that don't require any privileges, like echo and pwd, these run fine, and I see the results in the debug. But mkdir and sed don't.
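One way to make a silent failure like this visible is to have the script fail loudly and report where and as whom it is running; this is a sketch of a debugging approach, not a confirmed fix, and it assumes the Dockerfiles sit next to the script:
#!/bin/bash
# Debugging sketch (assumptions, not a confirmed fix): fail fast and show where
# the script actually runs, since mkdir and sed act relative to the current directory.
set -euo pipefail

echo "running in: $(pwd) as: $(whoami)"
cd "$(dirname "$0")"    # assumption: Dockerfile-shiny / Dockerfile-plumber sit next to this script

# 'sudo -n' refuses to prompt, so a missing passwordless-sudo rule for the agent
# account fails loudly here instead of later commands appearing to do nothing.
sudo -n true

sudo mkdir -p testDir
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-shiny
sudo sed -i -e "s/##branchBaseImagePlaceholder##/$1/g" Dockerfile-plumber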
I can't get a Symfony command to execute from a bash script when I run it via cron.
When I execute the .sh script by hand, everything works fine.
In my bash file the command is executed like this:
/usr/bin/php -q /var/www/pww24/bin/console pww24:import asari $office > /dev/null
I run the scripts as root, and the cron is set to root as well. For the test I set the file permissions to 777 and added +x for execution.
The bash script executes fine. It acts like it's skipping the command, but from the logs I can see that the code is executed.
It turned out that the Symfony environment variables I have stored on the server are not enough. When you execute the command from the command line it's fine, but when using cron you need them in the .env file. It turned out that in the process of continuous integration I only got a .env.dist file, and I had to create the .env file anyway.
Additionally, I added two lines to the crontab:
PATH=~/bin:/usr/bin/:/bin
SHELL=/bin/bash
And I run my command like this from the bash file:
sudo /usr/bin/php -q /var/www/pww24/bin/console pww24:import asari $office > /dev/null
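Put together, the cron-invoked wrapper ends up looking roughly like this; a sketch only, where the project path and command come from the answer above and the $office value is assumed to be set earlier in the real script:
#!/bin/bash
# Sketch of the cron-invoked wrapper. Cron starts with a minimal environment, so
# the Symfony variables have to be available from the project's .env file rather
# than from whatever an interactive shell profile would normally export.
office="example_office"   # placeholder: the real script sets this its own way

sudo /usr/bin/php -q /var/www/pww24/bin/console pww24:import asari "$office" > /dev/null 2>&1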
I have an Ant script, called from Jenkins, that starts a JBoss server after other deployment tasks. The deployment package already contains a startup script which wraps the JBoss run script:
[...]/bin/run.sh -b ip -c config >/dev/null 2>&1 &
The startup script runs fine when executed manually (i.e. ssh to the server and sudo ./startup.sh).
Now I'm having trouble invoking this startup script from an sshexec task. The task can execute the startup script and JBoss does get spun up, but it terminates as soon as the task returns and moves on to the next task, similar to running run.sh directly and then closing the session.
My task is pretty standard:
<sshexec host="host" username="username" password="password"
command="echo password | sudo -S sh ${JBOSS_HOME}/server/config/startup.sh" />
I'm confused. Shouldn't the startup script above have already taken care of starting JBoss independently of the session? Any idea how to solve this?
The remote system is Redhat 6.
Never mind, I found it. I still needed to combine nohup and backgrounding with the startup script, plus the "dirty workaround" from here:
https://unix.stackexchange.com/questions/91065/nohup-sudo-does-not-prompt-for-passwd-and-does-nothing (which was actually brilliant)
End result:
echo password | sudo -S env && sudo sh -c 'nohup startup.sh > /dev/null 2>&1 &'
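The same idea can also live inside the startup script itself, so that any caller that immediately drops the session (sshexec, cron, and so on) behaves the same way; a sketch, where the -b/-c arguments come from the question and the run.sh location under $JBOSS_HOME is an assumption:
#!/bin/sh
# Sketch of a session-proof startup.sh. nohup plus the redirections detach JBoss
# from the terminal and close its stdin/stdout/stderr, so the process survives
# when the sshexec session that launched it returns.
nohup "$JBOSS_HOME/bin/run.sh" -b ip -c config < /dev/null > /dev/null 2>&1 &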
I created the following file as .ebextensions/site_cron.config:
container_commands:
  01_remove_old_cron_jobs:
    command: "crontab -r || exit 0"
  02_cronjobs:
    command: "cat .ebextensions/crontab | crontab"
    leader_only: true
My .ebextensions/crontab is:
0 23 * * * php /var/www/html/index.php cron publish_new
I took a snapshot of my logs from Beanstalk and see this:
2014-05-24 14:41:46,743 [DEBUG] Running command 01_remove_old_cron_jobs
2014-05-24 14:41:46,744 [DEBUG] Generating defaults for command 01_remove_old_cron_jobs
<<<
2014-05-24 14:41:46,857 [DEBUG] No test for command 01_remove_old_cron_jobs
2014-05-24 14:41:46,877 [INFO] Command 01_remove_old_cron_jobs succeeded
2014-05-24 14:41:46,877 [DEBUG] Command 01_remove_old_cron_jobs output:
2014-05-24 14:41:46,878 [DEBUG] Running command 02_cronjobs
2014-05-24 14:41:46,878 [DEBUG] Generating defaults for command 02_cronjobs
<<<
2014-05-24 14:41:46,989 [DEBUG] Running test for command 02_cronjobs
2014-05-24 14:41:47,005 [DEBUG] Test command output:
2014-05-24 14:41:47,006 [DEBUG] Test for command 02_cronjobs passed
2014-05-24 14:41:47,028 [INFO] Command 02_cronjobs succeeded
2014-05-24 14:41:47,029 [DEBUG] Command 02_cronjobs output:
2014-05-24 14:41:47,029 [DEBUG] No services specified
It seems like everything went through OK and the instance is in a green state. However, I do not see anything in crontab -e or in /var/www/html/.ebextensions -- what gives? How can I know that my cron jobs are in place and ready to go?
I've been able to find my cron jobs.
I had to check with: sudo crontab -l
Previously I was using crontab -l
I'm not exactly sure why it shows under sudo but not for the logged-in user; I figured I was already root.
Anyway, since the jobs are showing in that crontab, I am going to assume they will run as expected.
For some reason if you install the jobs in the root crontab they don't run.
They have to be in the ec2-user crontab or they don't execute.
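If you have shell access to the instance, listing each crontab explicitly shows where the jobs actually ended up; a sketch, keeping in mind that user crontabs are per-user and /etc/cron.d is a separate mechanism again:
# Each user has a separate crontab, so list them explicitly (needs root):
sudo crontab -l -u root        # jobs installed by a container_command running as root
sudo crontab -l -u ec2-user    # jobs installed for ec2-user
# Files dropped into /etc/cron.d/ are a third, separate mechanism:
ls -l /etc/cron.d/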
So I have created a user named build on my machine (RHEL). This user is the one that can execute a Python script I made for the backend of a web application. Maven is installed for this user.
The Python script calls an .sh script which contains a mvn clean install command.
When I execute my Python script from the command line, everything works just fine, but when I try to automate it using a crontab, the Maven command won't execute (the rest of the .sh script works, as I'm echoing sentences before and after the command).
Here is the content of my crontab -u build -e:
*/5 * * * * cd /product/************/**********/src/ ; ./buildEngine.py
It behaves as if Maven wasn't installed. Is there something I'm missing?
Thanks
Probably the environment isn't being set up properly for your user when cron is running your .sh script. Try adding a PATH=$PATH:<path to mvn> at the start of the script to see if it can find Maven then. Also see the INVOCATION section of man bash for details on how a shell is initialized.
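For example, the top of the .sh script that wraps mvn clean install could look something like this; a sketch where the Maven and JDK locations are assumptions, so check them with 'which mvn' and 'echo $JAVA_HOME' from an interactive shell as the build user:
#!/bin/bash
# Sketch: give the cron-spawned shell the PATH pieces an interactive login would have.
# The Maven path below is an assumption; 'which mvn' run as the build user from a
# normal shell shows the real location (likewise for JAVA_HOME).
export PATH="$PATH:$HOME/bin:/opt/apache-maven/bin"
export JAVA_HOME="${JAVA_HOME:-/usr/lib/jvm/java}"

echo "Build starting, using: $(mvn -v | head -n 1)"
mvn clean install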