How do I setup and use Laravel Scheduling on AWS Elastic Beanstalk? - laravel

Scenario
As a fairly new user of Laravel and Elastic Beanstalk, I soon found myself needing to schedule operations, as most of us do.
In the past I had always used simple crontab scheduling for this. So now I stood before a list of questions:
How do I run Laravel code using crontab?
How do I setup crontab in my Elastic Beanstalk environment?
Finding the individual answers to these questions wasn't that hard. Combining them and actually getting it all to work, however, turned out to be a bit tricky, which is why I've decided to share the solution here for others struggling to get this working properly.
Environment
Laravel 5.6
PHP 7.1

TL;DR:
See working .ebextensions configuration at the end of the answer.
How do I run Laravel code using crontab?
The answer to this question is of course the most obvious one, and if you're even slightly into Laravel you surely know it: Scheduling!
I won't bore you with explaining the brilliant thing that is Laravel Scheduling since you can read about it in the documentation yourself.
But the key thing we need to take with us is that Laravel Scheduling uses crontab to execute, as described in the documentation:
* * * * * php /path-to-your-project/artisan schedule:run >> /dev/null 2>&1
Which brings us to the next, and a bit more tricky, question...
How do I setup crontab in my Elastic Beanstalk environment?
At first glance the answer to this question may seem pretty straightforward. I found this in the AWS Knowledge Center: How do I create a cron job on EC2 instances in an Elastic Beanstalk environment?
Here they describe how to set up a cron job on your Elastic Beanstalk EC2 machine using .ebextensions. In short, it creates a new file in the directory /etc/cron.d/ in which we put our desired cron job.
Files in this directory are then processed by crontab as the root user. Here are some of the traps I walked into, as commented below:
files:
  # The name of the file should not contain any dot (.) or dash (-); this can
  # cause the script not to run. Underscore (_) is OK.
  "/etc/cron.d/mycron":
    # This permission mode is important so that the root user can run the script.
    mode: "000644"
    # As the file is run by the root user, it needs to be the owner of the file.
    owner: root
    # For consistency it's a good idea to have root as the group as well.
    group: root
    # NOTE: We need to explicitly tell the cron job to be run as the root user!
    content: |
      * * * * * root /usr/local/bin/myscript.sh
    # There needs to be a newline after the actual cron job in the file.
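Those traps can be checked by hand on a running instance; here is a rough sketch (using /tmp instead of /etc/cron.d so it needs no root, and a made-up script path):

```shell
#!/bin/sh
# Sketch: create a cron.d entry the way the .ebextensions snippet would,
# then verify the traps: no dots/dashes in the name, mode 644, trailing newline.
CRON_FILE=/tmp/mycron            # on a real instance this would be /etc/cron.d/mycron
printf '%s\n' '* * * * * root /usr/local/bin/myscript.sh' > "$CRON_FILE"
chmod 644 "$CRON_FILE"

# 1. name must not contain '.' or '-'
basename "$CRON_FILE" | grep -qv '[.-]' && echo "name ok"
# 2. mode must be 644
[ "$(stat -c '%a' "$CRON_FILE")" = "644" ] && echo "mode ok"
# 3. file must end with a newline (command substitution strips it, leaving "")
[ -z "$(tail -c1 "$CRON_FILE")" ] && echo "newline ok"
```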
Once we have steered clear of all those traps, it's time to put in our Laravel Scheduling cron job from above. That should look something like this:
files:
  "/etc/cron.d/schedule_run":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root php /path-to-your-project/artisan schedule:run >> /dev/null 2>&1
This won't really work in most cases, though. That's because the Laravel Scheduler won't have access to your ENV variables, most noticeably not your database settings.
I found the answer to this here: How to Get Laravel Task Scheduling Working on AWS Elastic Beanstalk Cron
So a big shout out to George Bönnisch; I salute you sir for sharing this!
So with this last piece of the puzzle I was finally able to get the setup to work properly:
Working Solution
File structure:
[Project root]
|-- .ebextensions
| |-- cronjob.config
cronjob.config:
files:
  "/etc/cron.d/schedule_run":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root . /opt/elasticbeanstalk/support/envvars && /usr/bin/php /var/www/html/artisan schedule:run 1>> /dev/null 2>&1
commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"
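The `. /opt/elasticbeanstalk/support/envvars &&` prefix is the piece that hands the scheduler its environment. A minimal sketch of that sourcing pattern, with an invented file in /tmp standing in for the real envvars file:

```shell
#!/bin/sh
# Stand-in for the envvars file Beanstalk maintains (contents invented):
cat > /tmp/envvars <<'EOF'
export DB_HOST=mydb.example.com
export APP_ENV=production
EOF

# cron starts commands with an almost empty environment (env -i simulates that):
env -i /bin/sh -c 'echo "before: [$DB_HOST]"'
# prints: before: []

# Prefixing the command with ". file &&", as the cron line does, fixes it:
env -i /bin/sh -c '. /tmp/envvars && echo "after: [$DB_HOST]"'
# prints: after: [mydb.example.com]
```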
Tip when using Laravel Scheduling on AWS Elastic Beanstalk!
Since one of the key features of Elastic Beanstalk is that it can autoscale and add more servers when needed, you might want to have a look at a newer feature in Laravel Scheduling: Running Tasks On One Server.
In many cases you don't want your cron job to be executed on more than just one server. For example if you have a scheduled command for sending emails you don't want those sent multiple times.
NOTE: This requires that you use memcached or redis as your cache engine, as stated in the documentation. If you don't, have a look at the AWS service Elasticache.
NOTE 2: When using onOneServer() you must give the scheduled task a name using the name() method (before calling onOneServer()). Like so:
$schedule->command('my:task')
    ->name('my:task')
    ->daily()
    ->onOneServer();

A simpler approach is to use the new Periodic Tasks feature. Using the .ebextensions for cron jobs may lead to multiple machines running the same job or other race conditions with auto-scaling.
Jobs defined in cron.yaml are loaded only by the worker environment and are guaranteed to be run by only one machine at a time (the leader). There is a nice syncing mechanism to make sure there's no duplication. From the docs:
Elastic Beanstalk uses leader election to determine which instance in your worker environment queues the periodic task. Each instance attempts to become leader by writing to an Amazon DynamoDB table. The first instance that succeeds is the leader, and must continue to write to the table to maintain leader status. If the leader goes out of service, another instance quickly takes its place.
Creating a Cron for a Single or Multiple Workers
Place cron.yaml in the root of the project:
version: 1
cron:
  - name: "schedule"
    url: "/worker/schedule"
    schedule: "* * * * *"
One thing to take into consideration is that in Beanstalk periodic tasks are designed to make an HTTP POST request to a URL in your application that in turn triggers the job you want to run. This is similar to how it also manages queues with SQS.
For Laravel
For Laravel specifically, you may create the routes and controllers to handle each scheduled job. But a better approach is to use Laravel's scheduler and have a single route that you call every minute.
This package will create those routes automatically for you https://github.com/dusterio/laravel-aws-worker
Troubleshooting Permissions
If you are running into trouble with DynamoDB leader-table creation permissions when triggering a deploy from CodePipeline, it's because the CodePipeline service role needs dynamodb:CreateTable. For instructions, check this StackOverflow question.
Official Elastic Beanstalk Periodic Tasks Docs

You can use this with Amazon Linux 2.
Using .ebextensions configurations you can run the commands directly.
First you need to configure the command in a separate file: create a file under .ebextensions called cron_jobs.txt and add this line:
* * * * * root . /opt/elasticbeanstalk/deployment/env && /usr/bin/php /var/www/html/artisan schedule:run 1>> /var/www/html/laralog.log 2>&1
Notice that the first part, which loads the environment variables, differs between Amazon Linux 1 and Amazon Linux 2:
Linux 1: . /opt/elasticbeanstalk/support/envvars
Linux 2: . /opt/elasticbeanstalk/deployment/env
After defining the command in this separate file, we need to install it through the init.config file which holds the container commands in .ebextensions.
We can define it as follows:
container_commands:
  03cronjob:
    command: 'cat .ebextensions/cron_jobs.txt > /etc/cron.d/cron_jobs && chmod 644 /etc/cron.d/cron_jobs'
And that's it; try it out and you'll find the cron jobs executed successfully.
You can also read this explanatory article:
https://medium.com/qosoor/the-ultimate-guide-to-setup-cron-jobs-with-laravel-elastic-beanstalk-d497daaca1b0
Hope this is helpful

In AWS ECS you can use this without adding cron into the container:
https://github.com/spatie/laravel-cronless-schedule
This is how you can start the cronless schedule:
php artisan schedule:run-cronless

This is for Docker users; I had some trouble with this, so I thought it was worth posting.
The cron needs to be added to the schedule_run file on the server. However, even if you add a container_name to the Dockerrun.aws.json file, Beanstalk changes the actual container name to that plus some extra information, and therefore you cannot use the normal service name to run the cron.
So use $(docker ps -qf name=php-fpm), where name is part of the name of your container; it will return the ID of the container. My container is called php-fpm.
Here is my working file (.ebextensions/01-cron.config).
files:
  "/etc/cron.d/schedule_run":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root docker exec -t $(docker ps -qf name=php-fpm) sh -c "php artisan schedule:run" >> /var/log/eb-cron.log 2>&1
commands:
  002-remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"
Note: it might be that the first time this cron runs, the container is not up yet. Since the cron in my example runs each minute, it didn't matter too much, as by the time it runs the second time the container is up and working.
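How the $(docker ps -qf name=php-fpm) substitution behaves can be sketched with a stand-in for docker, since the real command needs a running daemon (the container ID below is invented):

```shell
#!/bin/sh
# Stand-in for `docker ps -qf name=php-fpm`, which prints the IDs of containers
# whose name matches the filter (the ID below is invented for the example):
docker() { echo "3f4a9b2c1d0e"; }

# The cron line substitutes the ID in before docker exec runs:
CONTAINER_ID=$(docker ps -qf name=php-fpm)
echo "would run: docker exec -t $CONTAINER_ID sh -c 'php artisan schedule:run'"
```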

After several efforts, I found an alternative way to run a cron job easily. You can run a cron job easily in 3 Steps.
Step 1: Create a route
routes/web.php
Route::get('/cron/run',[HomeController::class, 'cron'])->name('cron');
Step 2: Create a function in the HomeController
public function cron()
{
    \Artisan::call("schedule:run");
    return 'run cron successful';
}
Step 3:
Run the URL every minute using https://cron-job.org/en/
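The external service is, in effect, just running the equivalent of this cron entry from one of its machines (the domain is hypothetical):

```shell
# crontab entry on any always-on machine, standing in for the external service
# (replace example.com with your application's domain):
* * * * * curl -fsS https://example.com/cron/run > /dev/null 2>&1
```

Note that this exposes the scheduler over HTTP, so you may want to protect the route (e.g. with a secret token) before relying on it.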

The cron is not triggered because the location of the env file in the old solutions is wrong; other than that, there is nothing wrong with them. Below is the command you can use currently. We use it in all our projects.
Create a cronjob.config in the .ebextensions folder. Then put these to the inside of the file.
files:
  "/etc/cron.d/schedule_run":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root . /opt/elasticbeanstalk/deployment/env && /usr/bin/php /var/www/html/artisan schedule:run 1>> /var/www/html/storage/logs/laravel_cron.log 2>&1
commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"

The accepted answer (of #Niklas) unfortunately didn't work for me.
There's an even more comprehensive explanation of that answer here:
https://medium.com/qosoor/the-ultimate-guide-to-setup-cron-jobs-with-laravel-elastic-beanstalk-d497daaca1b0
But as I said, that didn't work for me.
What worked for me is simpler:
(.ebextensions/cron.config)
files:
  "/etc/cron.d/mycron":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root /usr/local/bin/myscript.sh
  "/usr/local/bin/myscript.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      date > /tmp/date
      cd /var/www/html/ && php artisan schedule:run >> /dev/null 2>&1
      exit 0
commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/mycron.bak"
I simply copied the command from Laravel documentation and put it into the config file given by AWS documentation.

Related

Is it safe to put a command which includes a sudo command on the scheduler?

Good morning,
I am working on a link between my Laravel file server and a Synology backup. The command I am using includes a sudo command to create and then disconnect the link. I want to know if I would be able to run this command from the scheduler.
Thanks
You can use for example (to run at midnight, every day):
0 0 * * * /path/to/your/command
This is a record you can add to the crontab of the user you use to run the command. Be aware that cron has a different environment, so you should set all the variables you need.
You may need to create special shell script to include there your environment variables:
. ~/.bash_profile
/path/to/your/command
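A concrete sketch of that wrapper idea, with an invented profile and variable standing in for ~/.bash_profile and your real command:

```shell
#!/bin/sh
# Stand-in for ~/.bash_profile (contents invented for the example):
cat > /tmp/fake_profile <<'EOF'
export APP_KEY=secret
EOF

# The wrapper script that cron would actually invoke:
cat > /tmp/run_backup.sh <<'EOF'
#!/bin/sh
. /tmp/fake_profile
echo "APP_KEY is $APP_KEY"
EOF
chmod +x /tmp/run_backup.sh

# Even started with an empty environment (as under cron), the variable is there:
env -i /tmp/run_backup.sh
# prints: APP_KEY is secret
```

The `echo` stands in for /path/to/your/command; in a real crontab you would point the entry at the wrapper script instead of the bare command.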

Clear all cronjobs via ansible playbook

I have been making cron jobs like this
- name: Add in cronjob to run every 30 mins to update do cool things
  cron:
    name: "Run cool.py"
    minute: "30"
    job: "cd /home/www/ && . env/bin/activate && python /home/www/cool.py >/dev/null 2>&1"
    user: cooluser
Now I'd like to know how to delete all the cron jobs on my Ubuntu box, since I've made a few while testing and want to start fresh.
There are multiple ways. If you want to do it "correctly", you set state: absent on all cronjobs and run the playbook:
- name: Add in cronjob to run every 30 mins to update do cool things
  cron:
    name: "Run cool.py"
    user: cooluser
    state: absent
You need to set it for each cron job you created; they are matched by name. Check the docs of the cron module.
If you just want to remove all the crontabs for a certain user, you can ssh to that box and run crontab -r -u cooluser. This will remove all jobs for cooluser but leave everything in /etc/ untouched; Ansible does not add anything there anyway.
If you want to do that, but use ansible, you can use the command module:
- name: remove all cronjobs for user cooluser
command: crontab -r -u cooluser
The first one is the one you should use in a production playbook. The other two can be used, if you just want to do a one-time cleanup to start fresh.
You can also check crontab -l -u cooluser to see all currently available cronjobs and crontab -e -u cooluser to edit them.

how to run a logrotate hourly

I created a program.conf that rotates my logs hourly on an EC2 instance. logrotate works well when I force it (with sudo logrotate program.conf --verbose --force), but it doesn't run each hour.
I tried several solutions found while googling this problem, like putting my program.conf in /etc/logrotate.d and moving logrotate from cron.daily into cron.hourly, but it doesn't work.
Here is my program.conf :
/home/user_i/*.log {
    hourly
    missingok
    dateext
    rotate 1
    compress
    size 100M
    sharedscripts
    postrotate
        /usr/bin/bash file.sh
    endscript
}
Do you have any idea, please?
Thanks
OP states in a comment that they can't use crontab and requires a solution utilizing /etc/cron.hourly.
Take the "program.conf" file you're using to define the logrotate parameters and put that file somewhere accessible by the root user but NOT in the /etc/logrotate.d/ directory. The idea is that if we're running this hourly in our own fashion, we don't want logrotate to also perform this rotation when it normally runs. This file only needs to be readable by root, not executable.
You need to make sure that ALL of the logrotate parameters you need are inside this file. We are going to be using logrotate to execute a rotation using only this configuration file, but that also means that any of the 'global' parameters you defined in /etc/logrotate.conf are not going to be taken into account. If you have any parameters in there that need to be respected by your custom rotation, they need to be duplicated into your "program.conf" file.
Inside your /etc/cron.hourly folder, create a new file (executable by root) that will be the script executing our custom rotation every hour (adjust your shell/shebang accordingly):
#!/usr/bin/bash
logrotate -f /some/dir/program.conf
This will make cron fire off an hourly, forced rotation for that configuration file without having any effect on logrotate's normal functionality.
On Debian Bullseye (and maybe other modern systemd-based systems) logrotate is handled by a systemd timer which runs once a day. To change the run frequency of logrotate to hourly, you have to override the default logrotate.timer unit.
Execute systemctl edit logrotate.timer and insert the following overrides:
[Timer]
OnCalendar=
OnCalendar=hourly
AccuracySec=1m
Then run systemctl reenable --now logrotate.timer to activate the changes.
The empty OnCalendar option resets the previously defined values.
The default logrotate.timer sets the AccuracySec option to one hour. Unfortunately, resetting it with an empty value is not possible, so it has to be set to one minute manually.
You will need to add the job in your crontab
crontab -e
And then add a job that runs every hour 14 minutes past,
14 * * * * /usr/sbin/logrotate /home/sammy/logrotate.conf --state /home/sammy/logrotate-state
Taken from : https://www.digitalocean.com/community/tutorials/how-to-manage-logfiles-with-logrotate-on-ubuntu-16-04
Also check that cron is actually running by doing
service cron status
and if it is stopped you can start it by doing
service cron start
If you are using Linux, once you install the logrotate rpm, it automatically creates a logrotate file under /etc/cron.daily. You can move this file to the /etc/cron.hourly folder; logrotate will then automatically run hourly.

Can't view any 'dependencies' inside zipkin UI dependencies tab

I have several services interacting with each other, all of them sending traces to openzipkin ( https://github.com/openzipkin/docker-zipkin ).
While I can see the system behaviour in detail, it looks like the 'dependencies' tab does not display anything at all.
The trace I check has 6 services, 21 spans and 43 spans, and I believe something should appear.
I'm using the latest (1.40.1) docker-zipkin with Cassandra as storage, and just by connecting to the Cassandra instance I can see there's no entry in the dependencies 'table'. Why?
Thanks
Same problem here with the docker images using Cassandra (1.40.1, 1.40.2, 1.1.4).
This is a problem specific to using Cassandra as the storage tier. Mysql and the in-memory storage generate the dependency graph on-demand as expected.
There are references to the following project to generate the Cassandra graph data for the UI to display.
https://github.com/openzipkin/zipkin-dependencies-spark
This looks to be superseded by ongoing work mentioned here
https://github.com/openzipkin/zipkin-dependencies-spark/issues/22
If the storage type is anything other than in-memory storage, then for the Zipkin dependencies graph you have to start a separate cron job/scheduler which reads the storage database and builds the graph, because Zipkin dependencies is a separate Spark job.
For reference : https://github.com/openzipkin/docker-zipkin-dependencies
I have used zipkin with elastic search as storage type. I will share the steps for setting up the zipkin dependencies with elastic search and cron job for the same:
1. cd ~/
2. curl -sSL https://zipkin.io/quickstart.sh | bash -s io.zipkin.dependencies:zipkin-dependencies:LATEST zipkin-dependencies.jar
3. touch cron.sh (or vi cron.sh)
4. Paste this content:
STORAGE_TYPE=elasticsearch ES_HOSTS=http://172.0.0.1:9200 ES_HTTP_LOGGING=BASIC ES_NODES_WAN_ONLY=true java -jar zipkin-dependencies.jar
5. chmod a+x cron.sh (make the file executable)
6. crontab -e; a window will open, paste the content below:
0 * * * * cd && ./cron.sh
This will start the job every hour; if you need it every 5 minutes, change the line to '*/5 * * * * cd && ./cron.sh'
7. To check that the cron job is scheduled, run the command crontab -l
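Steps 3 to 6 boil down to the following sketch (using /tmp for illustration; the host and jar path are the ones assumed above):

```shell
#!/bin/sh
# Steps 3-5: write the wrapper that launches the Spark dependencies job
# (storage host and jar location are the ones assumed in the answer):
cat > /tmp/cron.sh <<'EOF'
STORAGE_TYPE=elasticsearch ES_HOSTS=http://172.0.0.1:9200 \
ES_HTTP_LOGGING=BASIC ES_NODES_WAN_ONLY=true \
java -jar zipkin-dependencies.jar
EOF
chmod a+x /tmp/cron.sh

# Step 6: the line to add with `crontab -e` (hourly, run from $HOME):
echo '0 * * * * cd && ./cron.sh'
```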
The other solution is to start a separate service and run the cron job using Docker.
To get the latest zipkin-dependencies jar, run the following commands in a terminal:
cd /zipkindependencies  # where your Dockerfile is located
curl -sSL https://zipkin.io/quickstart.sh | bash -s io.zipkin.dependencies:zipkin-dependencies:LATEST
You will get the jar file in the above-mentioned directory.
Dockerfile
FROM openjdk:8-jre-alpine
ENV STORAGE_TYPE=elasticsearch
ENV ES_HOSTS=http://172.0.0.1:9200
ENV ES_NODES_WAN_ONLY=true
ADD crontab.txt /crontab.txt
ADD script.sh /script.sh
COPY entry.sh /entry.sh
COPY zipkin-dependencies.jar /
RUN chmod a+x /script.sh /entry.sh
RUN /usr/bin/crontab /crontab.txt
CMD ["/entry.sh"]
EXPOSE 8080
entry.sh
#!/bin/sh
# start cron
/usr/sbin/crond -f -l 8
script.sh
#!/bin/sh
java ${JAVA_OPTS} -jar /zipkin-dependencies.jar
crontab.txt
0 * * * * /script.sh >> /var/log/script.log

How to run cron in Docker container from Ruby image

I've tried setting up cron to run in my Docker container, but without success thus far.
This is the cron-related parts of the Dockerfile:
FROM ruby:2.2.2
# Add crontab file in the cron directory
RUN apt-get install -y rsyslog
ADD crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod +x /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
RUN service cron start
When I log on to the container instance, cron appears to be running:
$ service cron status
cron is running.
And /etc/cron.d has my job:
$ cat /etc/cron.d/hello-cron
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
But nothing is appended to /var/log/cron.log, so it doesn't appear to run.
If I then, from within the container, run $ cron, it registers my hello-cron file and the log file has "Hello world" appended every minute.
Your analysis is correct, the cron jobs are not running. This happens because normally, and by best practices, the container only runs a single process, such as Apache, NGINX, etc. - it does not run any of the normal operating system daemons such as crond.
No crond means, there is nothing that would read or execute your crontab.
There are several possibilities to solve this, but no perfect solution that I know of.
The worst one is to actually install crond, along with something like supervisord. It makes your container dramatically more complex.
You can create a separate container that runs nothing but cron. Mount whatever you need from the other containers as volumes. This is generally the recommended option, but it has limitations. The cron container needs to know a lot about the internals of your other containers, and the cron jobs don't execute in the same context as the rest of the containers.
You can create a cron job on the host, and have it execute scripts in the containers with docker exec. That works well, but creates a dependency between host and container. It may also not work at all if you don't have access to the host's operating system (for instance, in a hosted situation, or if a different team manages the host).
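The third option sketched as a host-side cron.d entry (the container name and path are hypothetical):

```shell
# /etc/cron.d/app_schedule on the HOST, not inside the container:
# docker exec runs the command inside the already-running container,
# so no crond is needed in the image.
* * * * * root docker exec my-app-container php /var/www/html/artisan schedule:run >> /dev/null 2>&1
```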
