Can't view any 'dependencies' inside zipkin UI dependencies tab - spring-boot

I have several services interacting with each other, all of them sending traces to openzipkin (https://github.com/openzipkin/docker-zipkin).
While I can see the system behaviour in detail, the 'dependencies' tab does not display anything at all.
The trace I'm checking has 6 services and 21 spans, so I believe something should appear.
I'm using the latest (1.40.1) docker-zipkin with Cassandra as storage, and
when connecting directly to the Cassandra instance I can see there is no entry in the dependencies table. Why?
Thanks

Same problem here with the docker images using Cassandra (1.40.1, 1.40.2, 1.1.4).
This is a problem specific to using Cassandra as the storage tier. MySQL and the in-memory storage generate the dependency graph on demand as expected.
There are references to the following project for generating the Cassandra dependency data for the UI to display:
https://github.com/openzipkin/zipkin-dependencies-spark
This looks to be superseded by ongoing work mentioned here
https://github.com/openzipkin/zipkin-dependencies-spark/issues/22

If the storage type is anything other than in-memory storage, then for the Zipkin dependencies graph you have to run a separate cron job/scheduler that reads the storage database and builds the graph, because zipkin-dependencies is a separate Spark job.
For reference : https://github.com/openzipkin/docker-zipkin-dependencies
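Since the original question uses Cassandra, a one-off manual run of the dependencies jar against Cassandra would look roughly like this. This is only a sketch: STORAGE_TYPE=cassandra (original schema) or cassandra3 (newer schema) and CASSANDRA_CONTACT_POINTS are the variable names documented in the zipkin-dependencies README, so verify them against the version you download, and 127.0.0.1 is a placeholder:
# process today's traces into the dependencies table; the README also documents passing a YYYY-MM-DD argument to backfill another day
STORAGE_TYPE=cassandra3 CASSANDRA_CONTACT_POINTS=127.0.0.1 java -jar zipkin-dependencies.jar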
I have used Zipkin with Elasticsearch as the storage type. I will share the steps for setting up zipkin-dependencies with Elasticsearch and a cron job for it:
1. cd ~/
2. curl -sSL https://zipkin.io/quickstart.sh | bash -s io.zipkin.dependencies:zipkin-dependencies:LATEST zipkin-dependencies.jar
3. touch cron.sh (or vi cron.sh)
4. Paste this content:
STORAGE_TYPE=elasticsearch ES_HOSTS=http://172.0.0.1:9200 ES_HTTP_LOGGING=BASIC ES_NODES_WAN_ONLY=true java -jar zipkin-dependencies.jar
5. chmod a+x cron.sh // make the file executable
6. crontab -e
A window will open; paste the content below. It runs the job at the top of every hour; if you need it every 5 minutes, change the entry to '*/5 * * * * cd && ./cron.sh'.
0 * * * * cd && ./cron.sh
7. To check that the cron job is scheduled, run crontab -l.
Another solution is to start a separate service and run the cron job using Docker.
Steps to get the latest zipkin-dependencies jar; run the given commands in a terminal:
cd /zipkindependencies // where your Dockerfile is available
curl -sSL https://zipkin.io/quickstart.sh | bash -s io.zipkin.dependencies:zipkin-dependencies:LATEST
You will get the jar file in the above-mentioned directory.
Dockerfile
FROM openjdk:8-jre-alpine
ENV STORAGE_TYPE=elasticsearch
ENV ES_HOSTS=http://172.0.0.1:9200
ENV ES_NODES_WAN_ONLY=true
# cron schedule, job script and container entrypoint
ADD crontab.txt /crontab.txt
ADD script.sh /script.sh
COPY entry.sh /entry.sh
COPY zipkin-dependencies.jar /
RUN chmod a+x /script.sh /entry.sh
# register the crontab for root so crond picks it up
RUN /usr/bin/crontab /crontab.txt
CMD ["/entry.sh"]
EXPOSE 8080
entry.sh
#!/bin/sh
# start cron
/usr/sbin/crond -f -l 8
script.sh
#!/bin/sh
java ${JAVA_OPTS} -jar /zipkin-dependencies.jar
crontab.txt
0 * * * * /script.sh >> /var/log/script.log
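To put the image to use, a build-and-run sketch could look like this (the image and container names are placeholders; point ES_HOSTS at your real cluster with docker run -e ES_HOSTS=... if it differs from the value baked into the Dockerfile):
docker build -t zipkin-dependencies-cron .
docker run -d --name zipkin-deps-cron zipkin-dependencies-cron
# once the first hourly run has happened, the job output lands in the log defined in crontab.txt
docker exec zipkin-deps-cron tail -f /var/log/script.log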

Related

Docker: How to ADD a service via ENV variables?

I have built a Docker cron environment to run cron jobs based on alseambusher/crontab-ui using alpine:3.15.3, and it works great.
For it to work I have had to install a number of things via the Dockerfile, editing it and adding Python so it could run a Python script, Perl for another service, OpenSSL so I could use a self-signed certificate, etc.
As it stands the container is a lot bigger, which is fine, but if I share the container, others won't necessarily want or need the services I have added and will likely need others that I haven't.
I would like to be able to add a command in the ENV of a Docker Compose file to add services at startup without having to do a full build each time. I'm sure it would be simpler to add build: > args: and have it rebuild the container each startup, but my goal is to have it add to an image only the services that each user needs and declares in the docker-compose file, with no need to have the files for the build on the system.
I know this will mean a longer startup depending on the services; I'm okay with that.
I know it's normal to run cron on the host and have it call into containers, but cron on Windows WSL has to be manually started every time WSL starts, is easy to forget about, and can't really be automated aside from on startup, and I'd like to do this entirely inside Docker.
How can I add an ENV like SERVICE_INSTALL to have it run in bash (which is already added in the Dockerfile and present at /bin/bash) at container startup?
Ideally I'd like to be able to add multiple SERVICE_INSTALL lines if at all possible.
Example:
SERVICE_INSTALL1='apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python'
SERVICE_INSTALL2='python3 -m ensurepip'
SERVICE_INSTALL3='apk add --no-cache perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs'
Or, if nothing else:
SERVICE_INSTALL='apk add --update --no-cache python3 perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs wget curl nodejs npm && ln -sf python3 /usr/bin/python'
But then that leaves the problem of installing things through pip or npm.
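For illustration, here is one way such variables could be consumed at startup. This is only a sketch: entrypoint.sh is a hypothetical wrapper that would have to replace the image's current start command, and it assumes supervisord -c /etc/supervisord.conf is the default start command, as described further down.
#!/bin/bash
# entrypoint.sh (hypothetical): run every SERVICE_INSTALL* variable, then hand off to the normal start command
for var in $(env | grep -o '^SERVICE_INSTALL[0-9]*' | sort); do
    echo "Running install command from $var"
    eval "${!var}"
done
exec supervisord -c /etc/supervisord.conf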
I have tried adding a command: to the docker-compose file, but every variation I have tried does not work. I'm also concerned about this method because, from my understanding, a command: replaces the container's startup command rather than adding to it, which is not ideal. Regardless, it doesn't seem like an install command: is possible anyway.
I have tried (each as a single command:, not all together):
command:
- BASH apk --update add openssl
- /bin/bash apk --update add openssl
- BASH RUN apk --update add openssl
- /bin/bash RUN apk --update add openssl
- sh apk --update add openssl
- /bin/sh apk --update add openssl
- apk --update add openssl
Each ends with a message along the lines of Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bash run apk --update add openssl": stat /bin/bash run apk --update add openssl: no such file or directory: unknown
UPDATE: I discovered a few things while trying to get this to work:
- for command: to work there must not be any - before it
- anything, even on multiple lines, is essentially treated as a single command as though it were all on the same line, and the parts have to be separated with &&
- it will repeat the command (or show the error of it failing to execute) and not continue to the next part until the current one completes
For example, the command mkdir -p /test leaves no logs, but the container never actually starts. While Portainer says it's running, trying to bash into it gives an "is restarting, wait until the container is running" message.
mkdir "-p /test" repeats this message:
mkdir: unrecognized option:
BusyBox v1.34.1 (2022-02-02 18:21:20 UTC) multi-call binary.
Usage: mkdir [-m MODE] [-p] DIRECTORY...
Create DIRECTORY
-m MODE Mode
-p No error if exists; make parent directories as needed
3 times 3-4 seconds apart, then 7 seconds, then 8 seconds, then 15, 27 and 53 seconds, then it hits a minute and keeps growing by a few seconds each try.
It also returns the same "wait for the container to be running" message when trying to bash in.
mkdir -p "/test" seems to be the correct formatting. It appears to work but leaves no logs, and when attempting to bash in it connects, shows the terminal, then exits; attempting to reconnect shows the same "container is restarting" message, likely because the container stopped once the command finished and is set to restart: always. Commenting out the restart option, the container simply exits.
mkdir -p "/test" followed by a new line with supervisord -c /etc/supervisord.conf (the default start command) has mkdir reporting mkdir: unrecognized option: c
Adding "supervisord -c /etc/supervisord.conf" (quoted) leaves no logs and a restarting container.
Reversing the order, with supervisord -c /etc/supervisord.conf first, has supervisord reporting the error Error: positional arguments are not supported: ['mkdir', '-p', '/test'] For help, use /usr/bin/supervisord -h
bash -c "supervisord -c /etc/supervisord.conf, followed on new lines by && mkdir -p /test and && mkdir -p /test2", runs with a working container, but no directories are created.
Reversing the order works and creates the directories, with a running container:
command:
  bash -c "mkdir -p /test
  && mkdir -p /test2
  && supervisord -c /etc/supervisord.conf"
This indicates that it runs them in order, but only proceeds to the next one after the previous finishes.
A test confirmed that the same can be done with other dependencies, as long as the original startup command comes last. I'd rather have the container start first and install the dependencies while it is running, since they are not required for the container itself to run but are only used by the cron jobs that run on a schedule; so if the container starts and the dependencies can't be used for the first 2, 3, even 5 or 10 minutes, that might only affect their first attempt if it happens to fall in that window.
This is alright, and I now understand better how the command: option works, but it still requires users to know and properly include the default start command. The command: options are also a lot more particular and easy to get wrong, while ENV variables are something every Docker user knows, has experience with, and are simpler to implement.

Continuously run bash script in Azure Container

I need to run a bash script continuously, for an indefinite time, inside a Docker container in Azure via the Azure Container Instances (ACI) service. My bash script has a while loop that keeps it running, and the Azure container is set to the OnFailure restart policy to restart the container if it fails.
I see that after running the container for about 2 days its status is Running. However, the bash script that was running in the foreground and sending logs to the Azure container console seems to have died and is no longer sending logs. I also see it's not doing what it is supposed to do.
How can I reliably keep this bash script running for an indefinite time in an Azure container?
The bash script, which has the internal while loop, is started as below:
Commands
bash
my-while-loop-script.sh
To solve this issue, I replaced the while loop inside my-while-loop-script.sh with crond, executing a Python application as a cron job. Below is the line inside my-while-loop-script.sh that starts cron; it will run the my-cron.cron contents shown below:
./busybox crond -f
To achieve that, I used busybox 1.30.1. To build busybox inside your Docker image:
ADD busybox-1.30.1/ /busybox
WORKDIR /busybox
RUN make defconfig
RUN make
You also need to add the cron settings to the crontabs directory:
RUN mkdir -p /var/spool/cron/crontabs/
# Copy cron settings
ADD my-cron.cron /var/spool/cron/crontabs/root
A sample my-cron.cron looks like a normal cron file:
* * * * * python my-app.py
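Before pushing to ACI, the crond setup can be sanity-checked locally with Docker (a sketch; my-aci-cron is a placeholder tag, and it assumes the Dockerfile fragments above are part of a complete Dockerfile with WORKDIR /busybox still in effect):
docker build -t my-aci-cron .
# run crond in the foreground with the most verbose logging, writing to stdout so each firing of my-cron.cron is visible
docker run --rm my-aci-cron ./busybox crond -f -l 0 -L /dev/stdout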

How do I setup and use Laravel Scheduling on AWS Elastic Beanstalk?

Scenario
As a fairly new user of Laravel and Elastic Beanstalk I soon found myself needing to schedule operations, like most of us do.
In the past I had always used simple crontab scheduling for this. So now I stood before a list of questions:
How do I run Laravel code using crontab?
How do I setup crontab in my Elastic Beanstalk environment?
Finding the individual answers to these questions wasn't that hard. Combining them and actually getting it all to work, however, turned out to be a bit tricky, which is why I've decided to share the solution here for others struggling to get this working properly.
Environment
Laravel 5.6
PHP 7.1
TL;DR:
See the working .ebextensions configuration at the end of the answer.
How do I run Laravel code using crontab?
The answer to this question is of course the most obvious one, and if you're even slightly into Laravel you surely know it: Scheduling!
I won't bore you by explaining the brilliant thing that is Laravel Scheduling, since you can read about it in the documentation yourself.
But the key thing we need to take with us is that Laravel Scheduling uses crontab to execute, as described in the documentation:
* * * * * php /path-to-your-project/artisan schedule:run >> /dev/null 2>&1
Which brings us to the next, and a bit more tricky, question...
How do I setup crontab in my Elastic Beanstalk environment?
At first glance the answer to this question may seem pretty straightforward. I found this in the AWS Knowledge Center: How do I create a cron job on EC2 instances in an Elastic Beanstalk environment?
Here they describe how to set up a cron job on your Elastic Beanstalk EC2 machine using .ebextensions. In short, it creates a new file in the directory /etc/cron.d/ in which we put our desired cron job.
Files in this directory are then processed by cron as the root user. These are some of the traps I walked into, as commented below:
files:
  # The name of the file should not contain any dot (.) or dash (-); this can
  # cause the script not to run. Underscore (_) is OK.
  "/etc/cron.d/mycron":
    # This permission is important so that the root user can run the script.
    mode: "000644"
    # As the file is run by the root user it needs to be the owner of the file.
    owner: root
    # For consistency it's a good idea to have root as the group as well.
    group: root
    # NOTE: We need to explicitly tell the cron job to be run as the root user!
    content: |
      * * * * * root /usr/local/bin/myscript.sh
    # There needs to be a new line after the actual cron job in the file.
Once we have steered clear of all of those traps, it's time to put in our Laravel Scheduling cron job from above. That should look something like this:
files:
  "/etc/cron.d/schedule_run":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root php /path-to-your-project/artisan schedule:run >> /dev/null 2>&1
This won't really work in most cases though. That's because the Laravel Scheduler won't have access to your ENV variables, most noticeably your database settings.
I found the answer to this here: How to Get Laravel Task Scheduling Working on AWS Elastic Beanstalk Cron
So a big shout out to George Bönnisch; I salute you sir for sharing this!
So with this last piece of the puzzle I was finally able to get the setup to work properly:
Working Solution
File structure:
[Project root]
|-- .ebextensions
| |-- cronjob.config
cronjob.config:
files:
  "/etc/cron.d/schedule_run":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root . /opt/elasticbeanstalk/support/envvars && /usr/bin/php /var/www/html/artisan schedule:run 1>> /dev/null 2>&1

commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"
Tip when using Laravel Scheduling on AWS Elastic Beanstalk!
Since one of the key features of Elastic Beanstalk is that it can autoscale and add more servers when needed, you might want to have a look at the newer feature in Laravel Scheduling: Running Tasks On One Server.
In many cases you don't want your cron job to be executed on more than one server. For example, if you have a scheduled command for sending emails you don't want those sent multiple times.
NOTE: This requires that you use memcached or redis as your cache engine, as stated in the documentation. If you don't, have a look at the AWS service ElastiCache.
NOTE 2: When using onOneServer() you must give the scheduled task a name using the name() method (before calling onOneServer()). Like so:
$schedule->command('my:task')
    ->name('my:task')
    ->daily()
    ->onOneServer();
A simpler approach is to use the new Periodic Tasks feature. Using the .ebextensions for cron jobs may lead to multiple machines running the same job or other race conditions with auto-scaling.
Jobs defined in cron.yaml are loaded only by the Worker environment and are guaranteed to run on only one machine at a time (the leader). There is a nice syncing mechanism to make sure there's no duplication. From the docs:
Elastic Beanstalk uses leader election to determine which instance in your worker environment queues the periodic task. Each instance attempts to become leader by writing to an Amazon DynamoDB table. The first instance that succeeds is the leader, and must continue to write to the table to maintain leader status. If the leader goes out of service, another instance quickly takes its place.
Creating a Cron for a Single or Multiple Workers
Place cron.yaml in the root of the project:
version: 1
cron:
  - name: "schedule"
    url: "/worker/schedule"
    schedule: "* * * * *"
One thing to take into consideration is that in Beanstalk periodic tasks are designed to make an HTTP POST request to a URL in your application that in turn triggers the job you want to run. This is similar to how it also manages queues with SQS.
For Laravel
For Laravel specifically, you may create the routes and controllers to handle each scheduled job. But a better approach is to use Laravel's scheduler and have a single route that you call every minute.
This package will create those routes automatically for you https://github.com/dusterio/laravel-aws-worker
Troubleshooting Permissions
If you are running into trouble with DynamoDB leader-table creation permissions when triggering a deploy from CodePipeline, it's because the CodePipeline service role needs dynamodb:CreateTable. For instructions, see this StackOverflow question.
Official Elastic Beanstalk Periodic Tasks Docs
You can use this with Amazon Linux 2.
Using .ebextensions configurations you can run the commands directly.
First you need to configure the command in a separate file:
create a file under .ebextensions called cron_jobs.txt and add this line:
* * * * * root . /opt/elasticbeanstalk/deployment/env && /usr/bin/php /var/www/html/artisan schedule:run 1>> /var/www/html/laralog.log 2>&1
Notice that the first part differs between Amazon Linux 2 and Amazon Linux 1;
it is the part that loads the environment variables:
Linux 1: . /opt/elasticbeanstalk/support/envvars
Linux 2: . /opt/elasticbeanstalk/deployment/env
After putting the command in this separate file,
we need to install it through the init config file that holds the container commands in .ebextensions.
We can define it as follows:
container_commands:
  03cronjob:
    command: 'cat .ebextensions/cron_jobs.txt > /etc/cron.d/cron_jobs && chmod 644 /etc/cron.d/cron_jobs'
And that's it; try it and you will find the cron jobs executing successfully.
You can also read this explanatory article:
https://medium.com/qosoor/the-ultimate-guide-to-setup-cron-jobs-with-laravel-elastic-beanstalk-d497daaca1b0
Hope this is helpful.
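Before waiting on cron, it is worth running the same line by hand on the instance to confirm the env file path and the artisan path are right for your app (a sketch using exactly the command from cron_jobs.txt above):
sudo bash -c '. /opt/elasticbeanstalk/deployment/env && /usr/bin/php /var/www/html/artisan schedule:run'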
In AWS ECS we can use this without adding cron into the container:
https://github.com/spatie/laravel-cronless-schedule
This is how you can start the cronless schedule:
php artisan schedule:run-cronless
This is for Docker users; I had some trouble with this so thought it's worth posting.
The cron needs to be added to the schedule_run file on the server. However, even if you add a container_name to the Dockerrun.aws.json file, it gets changed to that plus some extra information, and therefore you cannot use the normal service name to run the cron.
So using $(docker ps -qf name=php-fpm), where name is part of the name of your container, returns the ID of the container. My container is called php-fpm.
Here is my working file (.ebextensions/01-cron.config).
files:
  "/etc/cron.d/schedule_run":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root docker exec -t $(docker ps -qf name=php-fpm) sh -c "php artisan schedule:run" >> /var/log/eb-cron.log 2>&1

commands:
  002-remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"
Note: it might be that the first time this cron runs the container is not up yet. Since the cron in my example runs every minute it didn't matter too much, as by the time it runs the second time the container is up and working.
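You can also test the container lookup and the exec by hand on the Elastic Beanstalk host before relying on cron (a sketch using the same name filter as above):
docker ps -qf name=php-fpm    # should print exactly one container ID
docker exec -t $(docker ps -qf name=php-fpm) sh -c "php artisan schedule:run"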
After several attempts, I found an alternative way to run a cron job easily. You can set it up in 3 steps.
Step 1: Create a route
routes/web.php
Route::get('/cron/run',[HomeController::class, 'cron'])->name('cron');
Step 2: Create a function in the HomeController
public function cron()
{
    \Artisan::call("schedule:run");
    return 'run cron successful';
}
Step 3:
Run the URL every minute using https://cron-job.org/en/
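If you would rather not depend on an external service, the same effect can be achieved from any machine that has cron by requesting the route every minute (a sketch; the domain is a placeholder for your app's URL):
* * * * * curl -fsS https://your-app.example.com/cron/run > /dev/null 2>&1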
The cron is not triggered in the older solutions because the location of the env file they use is wrong; nothing else is actually wrong. Below is the command you can use currently. We use it in all our projects.
Create a cronjob.config file in the .ebextensions folder, then put the following inside it.
files:
  "/etc/cron.d/schedule_run":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root . /opt/elasticbeanstalk/deployment/env && /usr/bin/php /var/www/html/artisan schedule:run 1>> /var/www/html/storage/logs/laravel_cron.log 2>&1

commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"
The accepted answer (by Niklas) unfortunately didn't work for me.
There's an even more comprehensive explanation of that answer here:
https://medium.com/qosoor/the-ultimate-guide-to-setup-cron-jobs-with-laravel-elastic-beanstalk-d497daaca1b0
But as I said, that didn't work for me.
What worked for me is simpler:
(.ebextensions/cron.config)
files:
  "/etc/cron.d/mycron":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root /usr/local/bin/myscript.sh

  "/usr/local/bin/myscript.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      date > /tmp/date
      cd /var/www/html/ && php artisan schedule:run >> /dev/null 2>&1
      exit 0

commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/mycron.bak"
I simply copied the command from Laravel documentation and put it into the config file given by AWS documentation.

How to run cron in Docker container from Ruby image

I've tried setting up cron to run in my Docker container, but without success thus far.
These are the cron-related parts of the Dockerfile:
FROM ruby:2.2.2
# Add crontab file in the cron directory
RUN apt-get install -y rsyslog
ADD crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod +x /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
RUN service cron start
When I log on to the container instance, cron appears to be running:
$ service cron status
cron is running.
And /etc/cron.d has my job:
$ cat /etc/cron.d/hello-cron
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
But nothing is appended to /var/log/cron.log, so it doesn't appear to run.
If I then, from within the container, run $ cron, it registers my hello-cron file and the log file has "Hello world" appended every minute.
Your analysis is correct: the cron jobs are not running. This happens because normally, and by best practice, a container only runs a single process, such as Apache, NGINX, etc.; it does not run any of the normal operating system daemons such as crond.
No crond means there is nothing that would read or execute your crontab.
There are several possibilities to solve this, but no perfect solution that I know of.
The worst one is to actually install crond, along with something like supervisord. It makes your container dramatically more complex.
You can create a separate container that runs nothing but cron. Mount whatever you need from the other containers as volumes. This is generally the recommended option, but it has limitations. The cron container needs to know a lot about the internals of your other containers, and the cron jobs don't execute in the same context as the rest of the containers.
You can create a cron job on the host and have it execute scripts in the containers with docker exec (see the sketch below). That works well, but creates a dependency between the host and the container. It may also not work at all if you don't have access to the host's operating system (for instance, in a hosted situation, or if a different team manages the host).
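To make that last option concrete, the host's crontab entry could look something like this (a sketch; my-ruby-container is a placeholder, and the command mirrors the hello-cron job from the question):
* * * * * docker exec my-ruby-container /bin/sh -c 'echo "Hello world" >> /var/log/cron.log 2>&1'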

How do you run a crontab in Cygwin on Windows?

Some Cygwin commands are .exe files, so you can run them with the standard Windows Scheduler, but others don't have an .exe extension and so (it seems) can't be run from DOS.
For example I want updatedb to run nightly.
How do I make cron work?
You need to also install cygrunsrv so you can set cron up as a Windows service, then run cron-config.
If you want the cron jobs to send email of any output, you'll also need to install either exim or ssmtp (before running cron-config).
See /usr/share/doc/Cygwin/cron-*.README for more details.
Regarding programs without a .exe extension, they are probably shell scripts of some type. If you look at the first line of the file you can see what program you need to use to run them (e.g., "#!/bin/sh"), so you could perhaps execute them from the Windows scheduler by calling the shell program (e.g., "C:\cygwin\bin\sh.exe -l /my/cygwin/path/to/prog").
You have two options:
Install cron as a windows service, using cygrunsrv:
cygrunsrv -I cron -p /usr/sbin/cron -a -n
net start cron
Note: in (very) old versions of cron you need to use -D instead of -n.
The 'non-.exe' files are probably bash scripts, so you can run them via the Windows scheduler by invoking bash to run the script, e.g.:
C:\cygwin\bin\bash.exe -l -c "./full-path/to/script.sh"
hat tip http://linux.subogero.com/894/cron-on-cygwin/
Start the cygwin-setup and add the “cron” package from the “Admin” category.
We’ll run cron as a service by user SYSTEM. Poor SYSTEM therefore needs a home directory and a shell. The “/etc/passwd” file will define them.
$ mkdir /root
$ chown SYSTEM:root /root
$ mcedit /etc/passwd
SYSTEM:*:......:/root:/bin/bash
Then start the service:
$ cron-config
Do you want to remove or reinstall it (yes/no) yes
Do you want to install the cron daemon as a service? (yes/no) yes
Enter the value of CYGWIN for the daemon: [ ] ntsec
Do you want the cron daemon to run as yourself? (yes/no) no
Do you want to start the cron daemon as a service now? (yes/no) yes
Local users can now define their scheduled tasks like this (crontab will start your favourite editor):
$ crontab -e   # edit your user specific cron-table
HOME=/home/foo
PATH=/usr/local/bin:/usr/bin:/bin:$PATH
# testing - one per line
* * * * * touch ~/cron
#reboot ~/foo.sh
45 11 * * * ~/lunch_message_to_mates.sh
Domain users: it does not work. Poor cron is unable to run scheduled tasks on behalf of domain users on the machine. But there is another way: cron also runs stuff found in the system-level cron table in "/etc/crontab". So insert your stuff there, so that SYSTEM does it on its own behalf:
$ touch /etc/crontab
$ chown SYSTEM /etc/crontab
$ mcedit /etc/crontab
HOME=/root
PATH=/usr/local/bin:/usr/bin:/bin:$PATH
* * * * * SYSTEM touch ~/cron
#reboot SYSTEM rm -f /tmp/.ssh*
Finally a few words about crontab entries. They are either environment settings or scheduled commands. As seen above, on Cygwin it’s best to create a usable PATH. Home dir and shell are normally taken from “/etc/passwd”.
As to the columns of scheduled commands see the manual page.
If certain crontab entries do not run, the best diagnostic tool is this:
$ cronevents
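Beyond cronevents, it can also help to confirm the service itself is installed and running (a sketch; check cygrunsrv --help on your install to confirm the query options):
$ cygrunsrv -L           # list services installed via cygrunsrv; cron should appear
$ cygrunsrv -Q cron      # query the cron service's current state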
Just wanted to add that the options to cron seem to have changed; you now need to pass -n rather than -D.
cygrunsrv -I cron -p /usr/sbin/cron -a -n
I applied the instructions from this answer and it worked.
Just to point out a more copy-paste-friendly answer (because the Cygwin installation procedure is kind of anti-copy-paste in how it's implemented):
Click the WinLogo button, type cmd.exe, right-click it, choose "Start As Administrator". In the cmd prompt, cd into the directory containing the Cygwin installer:
cd <directory_where_i_forgot_the setup-x86_64.exe>
set package_name=cygrunsrv cron
setup-x86_64.exe -n -q -s http://cygwin.mirror.constant.com -P %package_name%
Ensure the installer does not throw any errors in the prompt... If it has, you probably have some Cygwin binaries running, or you are not a Windows admin, or it's some freaky bug...
Now in the cmd prompt:
C:\cygwin64\bin\cygrunsrv.exe -I cron -p /usr/sbin/cron -a -D
or whatever full file path you might have to cygrunsrv.exe, and
start cron as a Windows service in the cmd prompt:
net start cron
Now, in a bash terminal, run
crontab -e
and set up your cron entry; an example below:
#sync my gdrive each 10th minute
*/10 * * * * /home/Yordan/sync_gdrive.sh
# * * * * * command to be executed
# - - - - -
# | | | | |
# | | | | +- - - - day of week (0 - 6) (Sunday=0)
# | | | +- - - - - month (1 - 12)
# | | +- - - - - - day of month (1 - 31)
# | +- - - - - - - hour (0 - 23)
# +--------------- minute
I figured out how to get the Cygwin cron service running automatically when I logged on to Windows 7. Here's what worked for me:
Using Notepad, create file C:\cygwin\bin\Cygwin_launch_crontab_service_input.txt with content "no" on the first line and "yes" on the second line (without the quotes). These are your two responses to the prompts from cron-config.
Create file C:\cygwin\Cygwin_launch_crontab_service.bat with content:
@echo off
C:
chdir C:\cygwin\bin
bash cron-config < Cygwin_launch_crontab_service_input.txt
Add a Shortcut to the following in the Windows Startup folder:
Cygwin_launch_crontab_service.bat
See http://www.sevenforums.com/tutorials/1401-startup-programs-change.html if you need help on how to add to Startup. BTW, you can optionally add these in Startup if you would like:
Cygwin
XWin Server
The first one executes
C:\cygwin\Cygwin.bat
and the second one executes
C:\cygwin\bin\run.exe /usr/bin/bash.exe -l -c /usr/bin/startxwin.exe
The correct syntax to install cron in Cygwin as a Windows service is to pass -n as the argument, not -D:
cygrunsrv --install cron --path /usr/sbin/cron --args -n
-D returns a usage error when starting cron in Cygwin:
$
$cygrunsrv --install cron --path /usr/sbin/cron --args -D
$cygrunsrv --start cron
cygrunsrv: Error starting a service: QueryServiceStatus: Win32 error 1062:
The service has not been started.
$cat /var/log/cron.log
cron: unknown option -- D
usage: /usr/sbin/cron [-n] [-x [ext,sch,proc,parc,load,misc,test,bit]]
$
The page below has a good explanation.
Installing & Configuring the Cygwin Cron Service in Windows:
https://www.davidjnice.com/cygwin_cron_service.html
P.S. I had to run Cygwin64 Terminal on my Windows 10 PC as administrator in order to install cron as Windows service.
Getting updatedb to work in cron on Cygwin -- debugging steps
1) Make sure cron is installed.
a) Type 'cron' tab tab and look for completion help.
You should see crontab.exe, cron-config, etc. If not install cron using setup.
2) Run cron-config. Be sure to read all the ways to diagnose cron.
3) Run crontab -e
a) Create a test entry of something simple, e.g.,
"* * * * * echo $HOME >> /tmp/mycron.log" and save it.
4) cat /tmp/mycron.log. Does it show cron environment variable HOME
every minute?
5) Is HOME correct? By default mine was /home/myusername; not what I wanted.
So, I added the entry
"HOME='/cygdrive/c/documents and settings/myusername'" to crontab.
6) Once assured the test entry works I moved on to 'updatedb' by
adding an entry in crontab.
7) Since updatedb is a script, errors of sed and find showed up in
my cron.log file. In the error line, the absolute path of sed referenced
an old version of sed.exe and not the one in /usr/bin. I tried changing my
cron PATH environment variable but because it was so long crontab
considered the (otherwise valid) change to be an error. I tried an
explicit much-shorter PATH command, including what I thought were the essential
WINDOWS paths but my cron.log file was empty. Eventually I left PATH alone and
replaced the old sed.exe in the other path with sed.exe from /usr/bin.
After that updatedb ran to completion. To reduce the number of
permission error lines I eventually ended up with this:
"# Run updatedb at 2:10am once per day skipping Sat and Sun'
"10 2 * * 1-5 /usr/bin/updatedb --localpaths='/cygdrive/c' --prunepaths='/cygdrive/c/WINDOWS'"
Notes: I ran cron-config several times throughout this process
to restart the cygwin cron daemon.
