Run Laravel queued jobs as a Windows service [duplicate]

I need to set up a PHP script as a Windows service.
I need it to run regardless of which user is logged in, and to start on system boot - so a Windows service sounds like the best fit, but I'm happy to hear other suggestions.
(The script runs continuously; it's not a "run every 5 minutes" job I could hand to the Task Scheduler.)
http://support.microsoft.com/kb/251192 covers using the sc.exe program to install a service.
But from what I've read, I need a wrapper around the PHP script to accept the special commands from the Windows service manager. Can anyone help with this?

Maybe the Resource Kit Tools (specifically srvany.exe) can help you here. MSDN: How To Create A User-Defined Service and possibly this hint for 2008 Server should help you set up any executable as a service. (I've successfully used this on Windows 2003 Server, Windows 2008 Server and Windows XP Professional [with the other Resource Kit, though].)
You'd create a .bat containing php your-script.php, wrap that with srvany.exe, and voila - the script is started once the machine loads its services.
srvany.exe handles the start/stop/restart calls you'd expect a daemon to honour: it loads your executable on start, kills the process on stop, and does both on restart, so you don't have to worry about that part. You might want to check whether register_shutdown_function() can help you identify when your service process is killed.
You can even define dependencies to other services (say some database or some such).
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\THENAMEOFYOURSERVICE]
"DependOnService"="DEPENDONTHIS"
Replace THENAMEOFYOURSERVICE with the name you gave your service and DEPENDONTHIS with the name of the service to depend on (say "Postgres9.0" or something). Save that file as dependency.reg and load it with regedit /s dependency.reg (or double-click it in Explorer…).

We used FireDaemon for this task; it doesn't require wrapper scripts etc. Unfortunately it's not freeware.

You can run PHP on the command line, passing the script file and any other parameters. If you put that whole command line into the service configuration it should run, and you can also try the command out before creating the service.
If the PHP executable is outside your web root, you may need to add its folder to the Windows PATH variable.

I found this but haven't tried it myself - PHP has a win32service extension for exactly this:
http://uk.php.net/manual/en/book.win32service.php
Here are some examples:
http://uk.php.net/manual/en/win32service.examples.php
<?php
if ($argv[1] == 'run') {
    win32_start_service_ctrl_dispatcher('dummyphp');
    while (WIN32_SERVICE_CONTROL_STOP != win32_get_last_control_message()) {
        # do your work here.
        # try not to take up more than 30 seconds
        # before going around the loop again
    }
}
?>

After a few days... I found this magnificent option!
He built an .exe that receives the service commands, and it works fine!
https://superuser.com/questions/628176/php-cgi-exe-as-a-windows-service/643676#643676
The correct command:
sc create FOO binPath= "service.exe \"C:\php\php-cgi.exe -b 127.0.0.1:9000 -c C:\php\php.ini\"" type= own start= auto error= ignore DisplayName= "FOO php"

NSSM - the Non-Sucking Service Manager - is also a solution; then:
nssm install PHP php-cgi.exe -b 127.0.0.1:9000 -c C:\php\php.ini

Another approach: loop in a shell script.
In the PHP loop, keep a counter and exit after an hour so the process is restarted with fresh state. Monitor memory usage and exit when a limit is reached, and reconnect to the database periodically to avoid timeouts. The shell script runs a simple loop and creates a new log file for each iteration.
PHP and shell script:
ini_set('memory_limit', '300M');
$loopCnt = 0;
while (true) {
    /**
     * Maximal time limit for one loop iteration
     */
    set_time_limit(10);
    $loopCnt++;
    /**
     * finish after roughly an hour so the outer shell loop restarts us
     */
    if ($loopCnt > 60 * 60) {
        exit;
    }
    usleep(self::SLEEP_MICROSECONDS);
    if ($loopCnt % 60 === 0) { // log memory usage every 60 iterations
        $this->out('memory usage: ' . memory_get_usage());
        // reconnect DB to avoid timeouts and "server has gone away" errors
        Yii::$app->db->close();
        Yii::$app->db->open();
    }
    if (memory_get_usage() > self::MEMORY_LIMIT) {
        $this->out('memory limit reached: ' . self::MEMORY_LIMIT . ' actual: ' . memory_get_usage() . ' exit');
        exit;
    }
    /**
     * do work
     */
}
:: bat file
set loopcount=1000000
:loop
echo Loop %DATE% %TIME% %loopcount%
set t=%TIME: =0%
php cwbouncer.php > C:\logs\cwbouncer_%DATE:~2,2%%DATE:~5,2%%DATE:~8,2%_%t:~0,2%%t:~3,2%%t:~6,2%.log
set /a loopcount=loopcount-1
if %loopcount%==0 goto exitloop
goto loop
:exitloop
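The same supervisor pattern - rerun the worker every time it exits, capped by a loop counter - can be sketched in Python for illustration; the worker command and restart cap below are placeholders, not part of the original setup:

```python
import subprocess
import sys

# Sketch of the outer restart loop (the batch ":loop" above) in Python.
# The worker command and the restart cap are placeholders for illustration;
# a real worker would run for about an hour and then exit, as in the PHP code.
MAX_RESTARTS = 3
worker_cmd = [sys.executable, "-c", "print('worker finished')"]

runs = 0
for _ in range(MAX_RESTARTS):
    # Each iteration would normally redirect output to a fresh time-stamped log.
    result = subprocess.run(worker_cmd, capture_output=True, text=True, check=True)
    runs += 1

print(runs, result.stdout.strip())  # → 3 worker finished
```

The point of the two-level design is that the PHP process never has to stay healthy for more than an hour; leaks and stale DB connections are wiped out by the restart.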

Related

Cron Job Running Shell Script to Run Python Not Working

As written in the title, I am having some problem with my cron job script not executing. I am using CentOS 7.
My crontab -e looks like this:
30 0 * * * /opt/abc/efg/cron_jobs.sh >> /opt/abc/logs/cron_jobs.log
My cron_jobs.sh looks like this:
#!/bin/bash
#keep this script in efg folder
#run this daily through crontab -e
#45 0 * * * /opt/abc/efg/cron_job.sh
cd "$(dirname "$0")"
export PYTHONPATH=$PYTHONPATH:`pwd`
#some daily jobs script for abc
date
#send email to users whose keys will expire 7 days later
/usr/local/bin/python2.7 scripts/send_expiration_reminder.py -d 7
#send email to key owners whose keys will expire
/usr/local/bin/python2.7 scripts/send_expiration_reminder.py -d -1
# review user follow status daily task
# Need to use venv due to some library dependencies
/opt/abc/virtualenv/bin/python2.7 scripts/review_user_status.py
So, what I've found is that the log in /var/log/cron states that the job ran at 0:30 am accordingly.
Strangely, /opt/abc/logs/cron_jobs.log is empty, and the scripts do not seem to run at all. It used to output some log before I re-inputted the crontab (to re-install the cron jobs) and replaced cron_jobs.sh, so I think the problem might have arisen from those actions.
I would also like to know if there is any way to log the errors from executing a Python script. I have been trying to run /opt/abc/virtualenv/bin/python2.7 scripts/review_user_status.py but it never seems to work as intended (it does not run the main function at all), and there is no log output whatsoever.
I tried to run this on a different machine and it works properly, so I am not sure what is wrong with the cron job.
Here is a snippet of the log I got from /var/log/cron to show that the cron called the job:
Mar 22 18:32:01 web41 CROND[20252]: (root) CMD (/opt/abc/efg/cron_jobs.sh >> /opt/abc/logs/cron_jobs.log)
There are a few areas to check, if you haven't already.
Check that the executable permission is set on the script:
chmod +x <python file>
and that the user has permission to access the directories involved.
Run the script manually, as the user who will be running it from cron, to confirm it works from beginning to end - that is the more realistic test.
You can test your crontab schedule by temporarily setting it to run every minute; unlike Windows, there is no right-click "Run" option.
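To capture errors from the Python scripts as well, redirect stderr along with stdout (2>&1) in the crontab line. A minimal sketch with placeholder paths standing in for the real job:

```shell
# Stand-in for the cron job: a script that writes to both stdout and stderr.
cat > /tmp/cron_demo.sh <<'EOF'
#!/bin/bash
echo "normal output"
echo "an error message" >&2
EOF
chmod +x /tmp/cron_demo.sh

# Equivalent of: 30 0 * * * /opt/abc/efg/cron_jobs.sh >> /opt/abc/logs/cron_jobs.log 2>&1
/tmp/cron_demo.sh >> /tmp/cron_demo.log 2>&1
cat /tmp/cron_demo.log
```

Without the trailing 2>&1, Python tracebacks go to stderr and never reach the log file, which matches the "empty log" symptom described above.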
First, thank you all for the suggestions and heads-up. I found out that what was ruining my script was the presence of \r (carriage return) in the line breaks. Linux expects plain \n line endings and chokes on \r\n.
It happened because I transferred my files by FTP to the machine where the script breaks; it works fine on the other machine because there I used git pull instead of FTP.
Hope this info will also be helpful to others!
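The \r problem can be detected and fixed programmatically; a small Python sketch (the file path is hypothetical) doing what dos2unix would do:

```python
# Simulate a script uploaded with Windows CRLF line endings,
# then strip the \r characters the way dos2unix would.
path = "/tmp/crlf_demo.sh"

with open(path, "wb") as f:
    f.write(b"#!/bin/bash\r\necho hello\r\n")   # broken: CRLF endings

with open(path, "rb") as f:
    data = f.read()

if b"\r\n" in data:                             # detect the Windows endings
    with open(path, "wb") as f:
        f.write(data.replace(b"\r\n", b"\n"))   # rewrite with Unix \n only

with open(path, "rb") as f:
    print(b"\r" in f.read())  # → False
```

A stray \r after the shebang line is particularly nasty: the kernel tries to execute an interpreter literally named "/bin/bash\r", which does not exist.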

perl: long delays during repeated system calls

not sure if this is a Perl problem, a Cygwin problem, or a Windows problem:
I'm running Perl inside Cygwin under Windows 8. We have a comprehensive set of small scripts for individual tasks, and I've recently written a top-level script which repeatedly calls several of these scripts via 'system' calls. Each script runs flawlessly on its own; however, execution only happens in chunks: the top-level script starts to operate, after about 10 seconds it stops and the computer is idle for another 10-15 seconds, then it runs again for 10 seconds, and so on. Apart from this script the PC is only running the usual Windows background processes, i.e. the top-level script is the only process causing significant CPU load.
The script is too long to show here, but essentially consists of structures where a few variables are defined in loops and then combined via sprintf strings to call the scripts, just like the following snippet:
(...)
foreach $period (@periods)
{
    foreach $wt (@wtlist)
    {
        foreach $type ('WT', 'Ref')
        {
            $out = 1;
            $dir1 = 0 * $sectorwidth;
            $dir2 = 1 * $sectorwidth;
            $addfile0 = sprintf("%s/files/monthly_recal/%s%s_from%s.%s_%03d.da1", $workingdir_correl, $nameRoot, $wt, $type, $period, $dir1);
            $addfile1 = sprintf("%s/files/monthly_recal/%s%s_from%s.%s_%03d.da1", $workingdir_correl, $nameRoot, $wt, $type, $period, $dir2);
            if (-e $addfile0 && -e $addfile1)
            {
                $cmd = sprintf("perl ../00Bin/add_sort_ts.pl $addfile0 $addfile1 %s/files/monthly_recal/tmp/%s%s_from%s.%s.out%02d 0 $timeStep\n", $workingdir_correl, $nameRoot, $wt, $type, $period, $out);
                print ($cmd);
                system ($cmd);
            }
        }
    }
}
(...)
All variables are defined (simple strings or integers) and the individual calls all work.
When this top-level script is running, it runs several loop iterations back to back, so I don't think it's a matter of startup delays in the called scripts. To me it looks more as if Windows denies too many system calls in a row. I have other Perl scripts without 'system' calls which run for 10 minutes without showing this intermittent behaviour.
I have no real clue where to look, so any suggestion would be appreciated. The whole execution of the top-level script can take several hours, so any improvement here would greatly improve efficiency!
--UPDATE: From the discussion under Hakon's answer below it turned out that the problem lies in the shell used to run the Perl scripts - intermittent operation appears when the code is run from Windows cmd or a non-login shell, but not when run explicitly from a login shell (e.g. when using bash --login or starting mintty). I will open another thread soon to clarify why this happens. Thanks to all contributors here!
Can you try to simplify your problem a little bit? That will help locate the problem more easily. For example, the following code running on Windows 10, Strawberry Perl 5.30 (run from CMD, not Cygwin) shows no problems:
use strict;
use warnings;

for my $i (1..10) {
    system 'cmd.exe /c worker.bat';
}
with a simple worker.bat like:
@echo off
echo %time%
timeout 5 > NUL
echo %time%
The output from running the Perl script is:
10:35:00.71
10:35:05.18
10:35:05.22
10:35:10.17
10:35:10.22
10:35:15.15
10:35:15.19
10:35:20.19
10:35:20.23
10:35:25.15
10:35:25.20
10:35:30.16
10:35:30.21
10:35:35.14
10:35:35.18
10:35:40.16
10:35:40.20
10:35:45.16
10:35:45.21
10:35:50.15
Showing no delays between the system calls.
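One way to narrow such a problem down is to measure the idle gap between consecutive child-process calls directly; a cross-platform Python sketch of the same experiment (the child command is a trivial placeholder):

```python
import subprocess
import sys
import time

# Time each child process and the idle gap before the next one starts.
# With no OS-level throttling, the gaps should be close to zero.
gaps = []
prev_end = None
for _ in range(3):
    start = time.monotonic()
    subprocess.run([sys.executable, "-c", "pass"], check=True)
    end = time.monotonic()
    if prev_end is not None:
        gaps.append(start - prev_end)   # idle time between consecutive calls
    prev_end = end

print(all(gap < 1.0 for gap in gaps))  # → True
```

If the gaps grow with the number of calls, the delay is between the calls (shell or OS); if each call itself is slow, the delay is inside the children.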

How to redirect Windows cmd output to a text file?

I am trying to monitor the cmd output of a certain executable .exe and use it for another process running at the same time.
The problem is that all of cmd's redirection operators ('>', '>>', '<' and '|') only write the output once the command has returned.
What I wanted to do is to generate some kind of streaming log of my cmd.
You can run in your process in background by using
start /b something.exe > somefile.txt
Windows 20H2: redirection works fine when logged in as the true Administrator, but will NOT work when logged in as a created administrative user. I have used it for years to redirect the output of a multi-target backup system using HoboCopy, putting the console output in a log file. I have never been able to get it to work in Windows 10 ver 19 or 20 under a created administrative user.
You can prefix the command with "cmd /c" to start a new command prompt and redirect that command prompt's output:
cmd /c YOUR CODE > TextFileName.txt
Note: use only a single greater-than sign (>), since the output of cmd goes to TextFileName.txt.
This misses the error output, though, so you won't see messages such as "Request to UnKnown timed-out" for each failed address.
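For the "streaming log" part, the usual trick is to read the child's stdout line by line while it is still running, rather than waiting for the command to return. A cross-platform Python sketch (the child command and log path are placeholders):

```python
import subprocess
import sys

# Read the child's output as it is produced instead of after it exits.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import time\n"
     "for i in range(3):\n"
     "    print('line', i, flush=True)\n"
     "    time.sleep(0.1)"],
    stdout=subprocess.PIPE, text=True)

with open("/tmp/stream_demo.log", "w") as log:
    for line in child.stdout:      # yields each line as soon as it appears
        log.write(line)            # a monitoring process could also react here
child.wait()

print(open("/tmp/stream_demo.log").read().count("line"))  # → 3
```

The child must flush its output (or run unbuffered) for this to stream; a fully buffered child still delivers everything in one burst at exit.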

Beanstalk on Windows: How do I prevent commands running on re-deployment?

I'm trying to take advantage of AWS Elastic Beanstalk's facility to customize the EC2 instances it creates. This requires creating a .config file in the .ebextensions directory.
You can specify a number of commands which should be executed when the application is deployed to an instance. I'm using that to install some msi files, and also to configure EC2 to assign the instance a unique name. This then requires a reboot.
My problem is that I only want these commands to be run when an instance is first deployed. When I deploy a code-only change to existing instances they shouldn't be run.
I've tried using the "test" parameter, which should prevent the command running. I create a file as the last command, and then I check for the presence of that file in the "test" parameter. But it doesn't seem to work.
My config file is like this:
# File structure documented at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html
files:
"C:\\Users\\Public\\EnableEc2SetComputerName.ps1":
source: "[File Source]"
commands:
init-01-ec2setcomputername-enable:
test: cmd /c "if exist C:\\Users\\Public\\initialised (exit 1) else (exit 0)"
command: powershell.exe -ExecutionPolicy Bypass -File "C:\\Users\\Public\\EnableEc2SetComputerName.ps1"
waitAfterCompletion: 0
init-05-reboot-instance:
test: cmd /c "if exist C:\\Users\\Public\\initialised (exit 1) else (exit 0)"
command: shutdown -r # restart to enable EC2 to set the computer name
waitAfterCompletion: forever
init-06-mark-initialised:
test: cmd /c "if exist C:\\Users\\Public\\initialised (exit 1) else (exit 0)"
command: echo initialised > C:\\Users\\Public\\initialised
waitAfterCompletion: 0
Is there an alternative way to accomplish this? Or am I doing something stupid?
On Unix-based systems, there are the touch and test commands (referred to in this answer asking the equivalent question for Unix systems). What's the equivalent in Windows which will work best in this situation?
I think the problem lies in the fact that you are rebooting the machine before you can write the initialised file. You should be able to use a .bat file which first writes the semaphore, then reboots the instance, and run that .bat file contingent on the existence of the semaphore.
You can either download the .bat file with a files:source: directive or compose it in the .config with a files:content: directive.
Otherwise, your test: lines look good (I tested them locally, without a reboot).
Essentially, no. Elastic Beanstalk is an abstraction and looks after the underlying infrastructure for you: you give up a lot of environment control and gain easier deployment. If you research CloudFormation - in particular the metadata and cfn-init / cfn-hup - you'll see a very similar construct to Beanstalk's files and commands:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
If you need to do instance customization beyond application customization, then you're possibly using the wrong tool and are stuck with clumsy workarounds (until touch/test equivalents arrive from AWS); CloudFormation scripts would probably be a better fit.
I wrote about how to configure Windows instances via CloudFormation, and there's also extensive documentation on Amazon itself.
Given you've done all the hard work around the commands, I think it would be pretty easy to shift to a CloudFormation script and put the one-time startup code into userdata.
**edit - I think you could do it like this, though, if you went with Elastic Beanstalk:
command: dir initialised || powershell.exe -ExecutionPolicy Bypass -File "C:\\Users\\Public\\EnableEc2SetComputerName.ps1"
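The guard above runs the init step only when the marker file is missing. The same pattern can be illustrated in POSIX shell (paths are placeholders): creating the marker after the first run suppresses the command on later deployments:

```shell
cd /tmp && rm -f initialised init.log

# First "deployment": marker absent, so the init command runs.
ls initialised >/dev/null 2>&1 || echo "running init" >> init.log
touch initialised

# Second "deployment": marker present, init is skipped.
ls initialised >/dev/null 2>&1 || echo "running init" >> init.log

cat init.log   # the init step ran exactly once
```

Because the marker write and the guarded command sit in the same shell line, there is no window for a reboot to interrupt between them, which was the flaw in the original test:/command: split.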
I recently ran into a very similar problem, and utilized the answer from Jim Flanagan above, creating a PowerShell script to do this.
# restarts the server if this is not the first deployment
$strFileName = "C:\Users\Public\fwinitialised.txt"
if (Test-Path $strFileName) {
    # this file was created on a previous deployment, reboot now
    Restart-Computer -Force
} else {
    # this is a new instance, no need to reboot
    New-Item $strFileName -type file
}
And in the .ebextensions file...
6-reboot-instance:
command: powershell.exe -ExecutionPolicy Bypass -File "C:\\PERQ\\Deployment\\RestartServerOnRedeployment.ps1"
waitAfterCompletion: forever

.bat file not working

I have to call two .bat files: one named cbpp_job.bat and the other upload.bat.
In cbpp_job.bat I call CBPPService.exe and then run ftp with upload.bat as its script file.
cbpp_job.bat
call d:\csdb_exe\CBPPService.exe
call ftp -n -s:"d:\csdb\Success\upload.bat" xxxx.produrl.com
upload.bat
user XXXXXX
XXXXXXXXX
PUT ZA1P.FTP.CBPP.INTRFACE.GRP(+1) 'ZA1P.FTP.CBPP.INTRFACE.GRP(+1)'
BYE
EXIT
But when I run cbpp_job.bat from the command prompt it works well. When I schedule it in Task Scheduler it only runs CBPPService.exe and does not do the FTP.
The operating system is Windows Server 2008.
If Event Viewer doesn't show you why your script is failing, try modifying cbpp_job.bat to redirect stderr to a log file:
(
d:\csdb_exe\CBPPService.exe
ftp -n -s:"d:\csdb\Success\upload.bat" xxxx.produrl.com
) 2>"c:\csdbtask.log"
Maybe that'll help you figure out why task scheduler is failing.
