Laravel Horizon on Elastic Beanstalk with supervisord having errors

I have been trying to set up Horizon to run inside an Elastic Beanstalk instance, and it looks like it works.
supervisorctl status
gets me the following output
horizon RUNNING pid 3435, uptime 0:06:31
but the log prints a successful start, then loops an error message, and the queue is not working:
Horizon started successfully.
sh: line 0: exec: : not found
sh: line 0: exec: : not found <------ This prints like an infinite loop
The Horizon queue does work if I start it manually from the SSH shell.
Here are my configuration files for Elastic Beanstalk:
001-cron.config
files:
    "/etc/cron.d/mycron":
        mode: "000644"
        owner: root
        group: root
        content: |
            * * * * * root php /var/app/current/artisan schedule:run
002-horizon.config
container_commands:
    01-copy_systemd_file:
        command: "easy_install supervisor"
    02-enable_systemd:
        command: "mkdir -p /etc/supervisor/conf.d"
    03-copy_horizon_config:
        command: "cp .ebextensions/horizon.conf /etc/supervisor/conf.d/horizon.conf"
        cwd: "/var/app/ondeck"
    04-copy_supervidor_config:
        command: "cp .ebextensions/supervisord.conf /etc/supervisord.conf"
        cwd: "/var/app/ondeck"
    05-touch_log:
        command: "mkdir -p /var/log/supervisor/ && touch /var/log/supervisor/supervisord.log"
    06-run_supervisor:
        command: "/usr/local/bin/supervisord -c /etc/supervisord.conf || true"
    07-run_process:
        command: "/usr/local/bin/supervisorctl restart horizon:*"
    08-get_status:
        command: "/usr/local/bin/supervisorctl status"
horizon.conf
[program:horizon]
process_name=%(program_name)s
command=php /var/app/current/artisan horizon
autostart=true
autorestart=true
user=ec2-user
redirect_stderr=true
stdout_logfile=/var/log/horizon.log
supervisord.conf
; supervisor config file
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0700 ; socket file mode (default 0700)
[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir=/var/log/supervisor ; ('AUTO' child log dir, default $TEMP)
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL for a unix socket
; The [include] section can just contain the "files" setting. This
; setting can list multiple files (separated by whitespace or
; newlines). It can also contain wildcards. The filenames are
; interpreted as relative to this file. Included files *cannot*
; include files themselves.
[include]
files = /etc/supervisor/conf.d/*.conf
; Change according to your configurations

In your horizon.conf file, try specifying the full path to php (/usr/bin/php), like so:
[program:horizon]
process_name=%(program_name)s
command=/usr/bin/php /var/app/current/artisan horizon
autostart=true
autorestart=true
user=root
redirect_stderr=true
stdout_logfile=/var/log/horizon.log
Also note that I am using user=root.
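To confirm where the php binary actually lives on your instance (the /usr/bin/php path above is an assumption and can differ between AMIs), check from an SSH session:
which php
command -v php
Then use whatever path that prints in the command= line.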

Related

Supervisor socket error: Unlinking stale socket /tmp/supervisor.sock

I installed Supervisor on a shared Debian server. When I run:
supervisord -c supervisord.conf
I get this error continuously until I kill it: Unlinking stale socket /tmp/supervisor.sock
When I run supervisorctl status I get:
unix:///tmp/supervisor.sock no such file
My supervisord.conf file looks like this (I didn't change anything):
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
;chmod=0700 ; socket file mode (default 0700)
[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/app/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
;user=
numprocs=8
redirect_stderr=true
stdout_logfile= /path/to/app/worker.log
When I cd to /tmp to look for the socket file, the socket looks like this:
supervisor.sock.824804. The six-digit number is somehow generated randomly by the server after I run the supervisord -c supervisord.conf command. Do I have to account for this six-digit suffix in the supervisord.conf file? And how do I do that, since it is generated randomly? I have also installed and run Supervisor on macOS Mojave with no problems, and there the socket file just looked like supervisor.sock. Thanks for any help and suggestions in advance!
I set nodaemon = true in supervisord.conf and that fixed my problem.
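For reference, a minimal sketch of that change in the [supervisord] section (everything else can stay as it is):
[supervisord]
nodaemon=true ; run supervisord in the foreground instead of daemonizing
Note that with nodaemon=true supervisord stays attached to your terminal, so this is mainly useful when you (or another process manager) are keeping it running.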
Running
unlink /tmp/supervisor.sock
fixed the "Unlinking stale socket /tmp/supervisor.sock" issue for me.

Is there a way to ask supervisor to source vars from a file?

I have a supervisor conf file that needs a lot of environment variables:
$ cat /etc/supervisor/conf.d/aaa_staging.conf
[program:aaa_staging]
environment=
API_HOST="https://aaa-api-staging.zettauser.com/api/",
CLOUD_INSTANCE_NAME=media-server-xx-xx-xx-xx,
CLOUD_APPLICATION=media-server,
CLOUD_APP_COMPONENT=none,
CLOUD_ZONE=a,
CLOUD_REGION=b,
CLOUD_PRIVATE_IP=none,
CLOUD_PUBLIC_IP=xx.xx.xx.xx,
CLOUD_PUBLIC_IPV6=xx.xx.xx.xx.xx.xx,
CLOUD_PROVIDER=c
command=/opt/aaa-staging/bin/gunicorn 'aaa.app:app' --workers 4 --bind 0.0.0.0:5046 --timeout 1200
user=user
autostart=true
autorestart=true
redirect_stderr=true
directory=/opt/aaa-staging/lib64/python3.7/site-packages/aaa/
stdout_logfile=/var/log/aaa-staging_app
$
The variables originally live in a conf file:
$ cat /etc/aaa.conf
API_HOST="https://aaa-api-staging.zettauser.com/api/"
CLOUD_INSTANCE_NAME=media-server-xx-xx-xx-xx
CLOUD_APPLICATION=media-server
CLOUD_APP_COMPONENT=none
CLOUD_ZONE=a
CLOUD_REGION=b
CLOUD_PRIVATE_IP=none
CLOUD_PUBLIC_IP=xx.xx.xx.xx
CLOUD_PUBLIC_IPV6=xx.xx.xx.xx.xx.xx
CLOUD_PROVIDER=c
$
Is there a way to tell supervisor that it has to source the variables from /etc/aaa.conf?
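One possible workaround, sketched under the assumption that bash is available on the host (supervisor itself has no "source this file" directive that I know of): have supervisor start a shell that sources the file and then exec's the real command, and drop the environment= block:
command=/bin/bash -c "set -a && source /etc/aaa.conf && exec /opt/aaa-staging/bin/gunicorn 'aaa.app:app' --workers 4 --bind 0.0.0.0:5046 --timeout 1200"
Here set -a marks every variable assigned by the sourced file for export, so gunicorn sees them as environment variables.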

Supervisor not identifying Bash environment variables

I have an issue when using environment variables defined in .bash_profile. When I run this command:
sudo supervisord -c /etc/supervisord.conf
it always returns:
format string '%(ENV_Jas_name)s' for 'program:laravel-process-user-data-queue.user' contains names ('ENV_Jas_name') which cannot be expanded.
I am trying to set the user and the absolute path of the command from environment variables. Config files:
1. supervisord.conf file:
[unix_http_server]
file=/tmp/supervisor.sock ; the path to the socket file
[supervisord]
enviroment=Jas_root3=%(ENV_nnsms3_root)s,Jas_name=%(ENV_Jas_name$)
logfile=/tmp/supervisord.log ; main log file; default $CWD/supervisord.log
logfile_maxbytes=50MB ; max main logfile bytes b4 rotation; default 50MB
logfile_backups=10 ; # of main logfile backups; 0 means none, default 10
loglevel=info ; log level; default info; others: debug,warn,trace
pidfile=/tmp/supervisord.pid ; supervisord pidfile; default supervisord.pid
nodaemon=false ; start in foreground if true; default false
minfds=1024 ; min. avail startup file descriptors; default 1024
minprocs=200 ; min. avail process descriptors;default 200
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[program:laravel-process-user-data-queue]
process_name=%(program_name)s_%(process_num)02d
command=sudo php %(ENV_Jas_root)s/app/artisan queue:listen --tries=3 --queue=high,default,low
autostart=true
autorestart=true
user=%(ENV_Jas_name)s
numprocs=1
redirect_stderr=true
stderr_logfile=%(ENV_Jas_root)s/app/storage/logs/supervisor/processuserdata.err.log
stdout_logfile=%(ENV_Jas_root)s/app/storage/logs/supervisor/processuserdata.out.log
2. .bash_profile:
source ~/.profile
export PATH=/Applications/MAMP/bin/php/php7.0.19/bin:$PATH
export PATH="/usr/local/sbin:$PATH"
PATH="/Library/Frameworks/Python.framework/Versions/3.6/bin:${PATH}"
export PATH
export Jas_root="/Users/JasemAl-sadi/Desktop/SMS/Local websites/Jas_main"
export Jas_user="Sony"
I am running macOS High Sierra with MAMP Pro, PHP 7.1, and supervisor 3.3.3.
I have looked at most of the related issues, but nothing worked :(
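A hedged suggestion (an assumption on my part, not something confirmed in this thread): supervisord can only expand %(ENV_x)s from variables present in its own environment, and sudo does not carry the exports from your .bash_profile along, so try passing them explicitly on the command line, for example:
# pass the names the config expects (ENV_Jas_root / ENV_Jas_name) straight into supervisord's environment
sudo env Jas_root="/Users/JasemAl-sadi/Desktop/SMS/Local websites/Jas_main" Jas_name="Sony" supervisord -c /etc/supervisord.conf
Also note that your profile exports Jas_user, while the config references ENV_Jas_name.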

Why is supervisor processing one job more than once?

I've been working on a Laravel (5.3) project in which I have to crawl data from multiple websites.
So, for that I set up queue jobs and configured a supervisor for them.
Everything works fine as long as I configure supervisor to run only 1 process. In the file
/etc/supervisor/conf.d/laravel-worker.conf
numprocs=1
When I set numprocs to more than 1, it behaves strangely: supervisor executes the same job 2 or 3 times.
These are my versions:
Ubuntu 14.04.2 LTS
Laravel 5.3
supervisord 3.0b2
These are my configurations.
Contents of /etc/supervisor/supervisor.conf:
; supervisor config file
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0700 ; socket file mode (default 0700)
[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir=/var/log/supervisor ; ('AUTO' child log dir, default $TEMP)
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL for a unix socket
; The [include] section can just contain the "files" setting. This
; setting can list multiple files (separated by whitespace or
; newlines). It can also contain wildcards. The filenames are
; interpreted as relative to this file. Included files *cannot*
; include files themselves.
[include]
files = /etc/supervisor/conf.d/*.conf
Contents of /etc/supervisor/conf.d/laravel-worker.conf:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/myapp/artisan queue:work database --sleep=3 --tries=3
autostart=true
autorestart=true
user=hmabuzar
numprocs=25
redirect_stderr=true
stderr_events_enabled=true
stderr_logfile=/var/www/myapp/storage/logs/worker.error.log
stdout_logfile=/var/www/myapp/storage/logs/worker.log
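A hedged suggestion based on the Laravel queue documentation rather than anything stated in this question: with several workers polling the database queue, a job can be handed to a second worker if it runs longer than the retry_after value for the connection in config/queue.php, so keep the worker --timeout below it. A sketch:
command=php /var/www/myapp/artisan queue:work database --sleep=3 --tries=3 --timeout=60
with, in config/queue.php for the database connection:
'retry_after' => 90, // must stay larger than the worker --timeout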

How do I make cloud-init startup scripts run every time my EC2 instance boots?

I have an EC2 instance running an AMI based on the Amazon Linux AMI. Like all such AMIs, it supports the cloud-init system for running startup scripts based on the User Data passed into every instance. In this particular case, my User Data input happens to be an Include file that sources several other startup scripts:
#include
http://s3.amazonaws.com/path/to/script/1
http://s3.amazonaws.com/path/to/script/2
The first time I boot my instance, the cloud-init startup script runs correctly. However, if I do a soft reboot of the instance (by running sudo shutdown -r now, for instance), the instance comes back up without running the startup script the second time around. If I go into the system logs, I can see:
Running cloud-init user-scripts
user-scripts already ran once-per-instance
[ OK ]
This is not what I want -- I can see the utility of having startup scripts that only run once per instance lifetime, but in my case these should run every time the instance starts up, like normal startup scripts.
I realize that one possible solution is to manually have my scripts insert themselves into rc.local after running the first time. This seems burdensome, however, since the cloud-init and rc.d environments are subtly different and I would now have to debug scripts on first launch and all subsequent launches separately.
Does anyone know how I can tell cloud-init to always run my scripts? This certainly sounds like something the designers of cloud-init would have considered.
In Ubuntu 11.10, 12.04 and later, you can achieve this by making the 'scripts-user' module run 'always'.
In /etc/cloud/cloud.cfg you'll see something like:
cloud_final_modules:
- rightscale_userdata
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- scripts-user
- keys-to-console
- phone-home
- final-message
This can be modified after boot, or cloud-config data overriding this stanza can be inserted via user-data. I.e., in user-data you can provide:
#cloud-config
cloud_final_modules:
- rightscale_userdata
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- [scripts-user, always]
- keys-to-console
- phone-home
- final-message
That can also be '#included' as you've done in your description.
Unfortunately, right now, you cannot modify the 'cloud_final_modules', but only override it. I hope to add the ability to modify config sections at some point.
There is a bit more information on this in the cloud-config doc at
https://github.com/canonical/cloud-init/tree/master/doc/examples
Alternatively, you can put files in /var/lib/cloud/scripts/per-boot, and they'll be run by the 'scripts-per-boot' path.
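For example, a minimal sketch (the filename is only an illustration; the script must be executable):
#!/bin/sh
# /var/lib/cloud/scripts/per-boot/log-boot.sh
echo "booted at $(date)" >> /var/tmp/boot-times.log
Then chmod +x /var/lib/cloud/scripts/per-boot/log-boot.sh and it will run on every boot.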
In /etc/init.d/cloud-init-user-scripts, edit this line:
/usr/bin/cloud-init-run-module once-per-instance user-scripts execute run-parts ${SCRIPT_DIR} >/dev/null && success || failure
to
/usr/bin/cloud-init-run-module always user-scripts execute run-parts ${SCRIPT_DIR} >/dev/null && success || failure
Good luck!
cloud-init now supports this natively; see the runcmd vs. bootcmd command descriptions in the documentation (http://cloudinit.readthedocs.io/en/latest/topics/examples.html#run-commands-on-first-boot):
"runcmd":
#cloud-config
# run commands
# default: none
# runcmd contains a list of either lists or a string
# each item will be executed in order at rc.local like level with
# output to the console
# - runcmd only runs during the first boot
# - if the item is a list, the items will be properly executed as if
# passed to execve(3) (with the first arg as the command).
# - if the item is a string, it will be simply written to the file and
# will be interpreted by 'sh'
#
# Note, that the list has to be proper yaml, so you have to quote
# any characters yaml would eat (':' can be problematic)
runcmd:
- [ ls, -l, / ]
- [ sh, -xc, "echo $(date) ': hello world!'" ]
- [ sh, -c, echo "=========hello world'=========" ]
- ls -l /root
- [ wget, "http://slashdot.org", -O, /tmp/index.html ]
"bootcmd":
#cloud-config
# boot commands
# default: none
# this is very similar to runcmd, but commands run very early
# in the boot process, only slightly after a 'boothook' would run.
# bootcmd should really only be used for things that could not be
# done later in the boot process. bootcmd is very much like
# boothook, but possibly with more friendly.
# - bootcmd will run on every boot
# - the INSTANCE_ID variable will be set to the current instance id.
# - you can use 'cloud-init-per' command to help only run once
bootcmd:
- echo 192.168.1.130 us.archive.ubuntu.com >> /etc/hosts
- [ cloud-init-per, once, mymkfs, mkfs, /dev/vdb ]
Also note the "cloud-init-per" command example in bootcmd. From its help:
Usage: cloud-init-per frequency name cmd [ arg1 [ arg2 [ ... ] ]
run cmd with arguments provided.
This utility can make it easier to use boothooks or bootcmd
on a per "once" or "always" basis.
If frequency is:
* once: run only once (do not re-run for new instance-id)
* instance: run only the first boot for a given instance-id
* always: run every boot
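For example, a sketch of the 'instance' frequency (the name my-apt-update and the command are only illustrations): run apt-get update on the first boot of each new instance-id and never again for that instance:
cloud-init-per instance my-apt-update apt-get update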
One possibility, although somewhat hackish, is to delete the lock file that cloud-init uses to determine whether or not the user-script has already run. In my case (Amazon Linux AMI), this lock file is located in /var/lib/cloud/sem/ and is named user-scripts.i-7f3f1d11 (the hash part at the end changes every boot). Therefore, the following user-data script added to the end of the Include file will do the trick:
#!/bin/sh
rm /var/lib/cloud/sem/user-scripts.*
I'm not sure if this will have any adverse effects on anything else, but it has worked in my experiments.
Please use the script below, placing the cloud-config part above your bash script.
Example: here I am printing "Hello World." to a file.
Stop the instance before adding this to the user data.
Script:
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
/bin/echo "Hello World." >> /var/tmp/sdksdfjsdlf
--//
I struggled with this issue for almost two days, tried all of the solutions I could find and finally, combining several approaches, came up with the following:
MyResource:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      configSets:
        setup_process:
          - "prepare"
          - "run_for_instance"
      prepare:
        commands:
          01_apt_update:
            command: "apt-get update"
          02_clone_project:
            command: "mkdir -p /replication && rm -rf /replication/* && git clone https://github.com/awslabs/dynamodb-cross-region-library.git /replication/dynamodb-cross-region-library/"
          03_build_project:
            command: "mvn install -DskipTests=true"
            cwd: "/replication/dynamodb-cross-region-library"
          04_prepare_for_apac:
            command: "mkdir -p /replication/replication-west && rm -rf /replication/replication-west/* && cp /replication/dynamodb-cross-region-library/target/dynamodb-cross-region-replication-1.2.1.jar /replication/replication-west/replication-runner.jar"
      run_for_instance:
        commands:
          01_run:
            command: !Sub "java -jar replication-runner.jar --sourceRegion us-east-1 --sourceTable ${TableName} --destinationRegion ap-southeast-1 --destinationTable ${TableName} --taskName -us-ap >/dev/null 2>&1 &"
            cwd: "/replication/replication-west"
  Properties:
    UserData:
      Fn::Base64:
        !Sub |
          #cloud-config
          cloud_final_modules:
          - [scripts-user, always]
          runcmd:
          - /usr/local/bin/cfn-init -v -c setup_process --stack ${AWS::StackName} --resource MyResource --region ${AWS::Region}
          - /usr/local/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource MyResource --region ${AWS::Region}
This is the setup for DynamoDb cross-region replication process.
If someone wants to do this with the CDK, here's a Python example.
For Windows, user data has a special persist tag, but for Linux you need to use multipart user data to set up cloud-init first. This Linux example worked with the cloud-config part type (see ref blog) instead of cloud-boothook, which requires a cloud-init-per call (see also bootcmd) that I couldn't test out (e.g. cloud-init-per always).
Linux example:
from aws_cdk import aws_ec2 as ec2

# Create some userdata commands
instance_userdata = ec2.UserData.for_linux()
instance_userdata.add_commands("apt update")
# ...
# Now create the first part to make cloud-init run it always
cinit_conf = ec2.UserData.for_linux()
cinit_conf.add_commands('#cloud-config')
cinit_conf.add_commands('cloud_final_modules:')
cinit_conf.add_commands('- [scripts-user, always]')
multipart_ud = ec2.MultipartUserData()
#### Setup to run every time instance starts
multipart_ud.add_part(ec2.MultipartBody.from_user_data(cinit_conf, content_type='text/cloud-config'))
#### Add the commands desired to run every time
multipart_ud.add_part(ec2.MultipartBody.from_user_data(instance_userdata))
ec2.Instance(
    self, "myec2",
    user_data=multipart_ud,
    # other required config...
)
Windows example:
instance_userdata = ec2.UserData.for_windows()
# Bootstrap
instance_userdata.add_commands("Write-Output 'Run some commands'")
# ...
# Making all the commands persistent - i.e. running on each instance start
data_script = instance_userdata.render()
data_script += "<persist>true</persist>"
ud = ec2.UserData.custom(data_script)
ec2.Instance(
    self, "myWinec2",
    user_data=ud,
    # other required config...
)
Another approach is to use #cloud-boothook in your user data script. From the docs:
Cloud Boothook
Begins with #cloud-boothook or Content-Type: text/cloud-boothook.
This content is boothook data. It is stored in a file under /var/lib/cloud and then executed immediately.
This is the earliest "hook" available. There is no mechanism provided for running it only one time. The boothook must take care
of this itself. It is provided with the instance ID in the environment
variable INSTANCE_ID. Use this variable to provide a once-per-instance
set of boothook data.
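A minimal sketch of such a boothook in user data (the log path is only an illustration):
#cloud-boothook
#!/bin/sh
# runs very early on every boot; check $INSTANCE_ID yourself if you only want it once per instance
echo "boothook ran at $(date)" >> /var/tmp/boothook.log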
