Envoy Laravel run forever - laravel

I'm using a Bitbucket pipeline to deploy and run some artisan commands,
but there is a problem that gives me a headache: when an artisan command fails, Envoy shows the error/exception but does not continue with the next Envoy task. It keeps showing the exception until I kill the PHP process on the VPS server (using the kill/pkill command).
Here is my Envoy task:
@task('start_check_log', ['on' => 'web'])
cd /home/deployer/mywork/laravel/
nohup bash -c "php artisan serve --env=dusk.local 2>&1 &" && sleep 2
curl -vk http://localhost:8000 &
php artisan check_log
sudo kill $(sudo lsof -t -i:8000)
php artisan cache:clear
php artisan config:clear
@endtask
php artisan check_log just checks the log file; I want to see whether an error occurred, but when an error comes up, Envoy gets stuck on that error.

I've resolved this problem; it was just my mistake. I had to chain the command so that Envoy continues the task: php artisan check_log && sleep 2, and then Envoy carries on with the remaining steps.
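For reference, the adjusted task with that single change applied (everything else is the same as above):

@task('start_check_log', ['on' => 'web'])
cd /home/deployer/mywork/laravel/
nohup bash -c "php artisan serve --env=dusk.local 2>&1 &" && sleep 2
curl -vk http://localhost:8000 &
php artisan check_log && sleep 2
sudo kill $(sudo lsof -t -i:8000)
php artisan cache:clear
php artisan config:clear
@endtask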

Related

Display Laravel artisan command output (command called from another command)

I am writing a console command. This command also calls another command.
Basically say: php artisan command:one. So inside command one, I call php artisan command:two.
They both produce output ($this->info()) stating the progress or state of the current operations. But when I run php artisan command:one, I can't see the info displayed by php artisan command:two, even though command:two has its own output info and progress state.
How do I ensure to see the progress and states from php artisan command:two which is called in php artisan command:one?
Using Artisan::call() doesn't redirect the called command's output to the original command's output.
To call another Artisan command and display its output, you should use $this->call() from within your command.
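A minimal sketch of what that looks like inside command:one's handle() method (the class layout is illustrative; the command names are the ones from the question):

public function handle()
{
    $this->info('command:one starting');

    // $this->call() runs the other command and forwards its output
    // (info lines, progress bars) to this command's output.
    $this->call('command:two');

    // $this->callSilent('command:two') would run it while suppressing that output.
    $this->info('command:one finished');
}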

Cipher exception when deploying Laravel to Elastic Beanstalk

Ok I'm starting to lose my mind here. When I deploy my app to elastic beanstalk I get this error:
[2017-12-15 17:50:18] Tylercd100\LERN.CRITICAL: RuntimeException was thrown! The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths.
To be clear I deploy my app source without dependencies installed and with APP_KEY not set, I'm leaving the dependency installation to elastic beanstalk which installs them during deployment.
In my AWS .config file I have defined the deployment commands as follows:
---
commands:
  00init:
    command: "sudo yum install gcc-c++"
  01init:
    command: "rm -f amazon-elasticache-cluster-client.so"
  02init:
    command: "wget https://s3.amazonaws.com/php-amazon-elasticache-cluster-client-7-1/amazon-elasticache-cluster-client.so"
  03init:
    command: "sudo mv amazon-elasticache-cluster-client.so /usr/lib64/php/7.1/modules/"
  04init:
    command: "echo \"extension=amazon-elasticache-cluster-client.so\" | sudo tee /etc/php-7.1.d/50-memcached.ini"
  05init:
    command: "sudo /etc/init.d/httpd restart"
container_commands:
  00permissions:
    command: "find * -type d -print0 | xargs -0 chmod 0755"
  01permissions:
    command: "find . -type f -print0 | xargs -0 chmod 0644"
  02permissions:
    command: "chmod -R 775 storage bootstrap/cache"
  03cache:
    command: "php artisan cache:clear"
  04key:
    command: "php artisan key:generate"
  05cache:
    command: "php artisan config:cache"
  06cache:
    command: "php artisan route:cache"
  07optimize:
    command: "php artisan optimize"
These commands run during deployment to AWS without any error.
When I go and check .env directly on the virtual machine the APP_KEY is set as it should be considering the commands above.
Yet I get the cipher error.
Assuming you set APP_KEY on the Elastic Beanstalk configuration page in the dashboard, there are two things I would like to point out.
1. When php artisan config:cache is run in container_commands, it caches file paths under /var/app/ondeck/... This causes runtime errors when Laravel tries to access the cached files.
2. The cipher error occurs when Laravel cannot read the APP_KEY value from your .env file. If a line like APP_KEY=${APP_KEY} exists in your .env file, that is the main cause of the error. You assume that the APP_KEY value is going to be read from the environment configuration made in the dashboard. However, the environment variables have not yet been set by Beanstalk at the time your commands or container_commands run. You can solve this issue by sourcing the environment variables yourself, by including the command below in your commands or files.
source /opt/elasticbeanstalk/support/envvars
e.g.
"/opt/elasticbeanstalk/hooks/appdeploy/post/91_config_cache.sh":
mode: "000755"
owner: root
group: root
content: |
#!/usr/bin/env bash
source /opt/elasticbeanstalk/support/envvars
echo "Running php artisan config:cache"
cd /var/app/current
php artisan config:cache
echo "Finished php artisan config:cache"

Can't run a command from Dockerfile or Docker Compose

I am trying to create an nginx-laravel-mysql stack of Docker containers using laradock, a free docker-compose plugin for Laravel.
To make it work, I have to run php artisan key:generate either from my local environment, or from within a running container (both bad practices).
I tried adding command: /bin/bash -c "php artisan key:generate" to my docker-compose.yml file. This causes an exit; when I run docker-compose ps, I see laradock_workspace_1 /bin/bash -c nohup php art ... Exit 1. Adding nohup gives the same result. In fact, any command I run here causes an exit.
On to the Dockerfile. If I add RUN php artisan key:generate (or any variation of it), I get this:
ERROR: Service 'workspace' failed to build: The command '/bin/sh -c php artisan key:generate' returned a non-zero code: 1
If I run that same command as CMD or ENTRYPOINT, even with nohup, it runs and generates the key, but exits:
docker-compose ps says: laradock_workspace_1 /bin/sh -c nohup php artis... Exit 0
I can add restart: always to docker-compose.yml, but that begets a vicious cycle of key generation, exit, restart.
Any ideas how to execute this command (or any command) from Dockerfile or docker-compose.yml without exiting?
EDIT: to answer @dnephin's question: php artisan key:generate adds a hash to the /.env file and adds a value to a PHP array. It just takes that command, no input. When I run docker-compose run workspace php artisan key:generate, I get Could not open input file: artisan.
Strangely, when I run docker-compose run workspace pwd, I see the correct path to my Laravel files (and I can see all of them if I run docker-compose exec workspace bash), but when I try to run docker-compose run workspace ls, I see nothing. It's like the files aren't there.
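One common pattern for running a one-off command like this without the container exiting afterwards is a wrapper entrypoint script that runs the artisan command and then execs the image's real main process. A minimal sketch, not taken from laradock (the script name, working directory, and final exec are assumptions):

#!/usr/bin/env bash
# docker-entrypoint.sh (hypothetical name): one-off setup, then hand off.
set -e
cd /var/www            # adjust to wherever artisan lives in the image
php artisan key:generate
# exec replaces this shell with whatever CMD the image defines
# (php-fpm, a queue worker, ...), so the container keeps running.
exec "$@"

It would be wired in via ENTRYPOINT in the Dockerfile, with the existing CMD left as the container's main process.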

How will I run queue listener of Laravel 5.2 in background?

In my project I am using the database queue and running it with the command
php artisan queue:listen
in a console window, and it is working. But on my Windows server there are many projects using queues, so many console windows are open. It is quite inconvenient. Is it possible to run this command in the background without keeping a console window open?
You can use the command below, but it will work only until you log out or restart:
nohup php artisan queue:work --daemon &
The trailing ampersand (&) causes process start in the background, so you can continue to use the shell and do not have to wait until the script is finished.
See nohup
nohup - run a command immune to hangups, with output to a non-tty
This will output information to a file entitled nohup.out in the directory where you run the command. If you have no interest in the output, you can redirect stdout and stderr to /dev/null, or similarly you could redirect it to your normal Laravel log. For example:
nohup php artisan queue:work --daemon > /dev/null 2>&1 &
nohup php artisan queue:work --daemon > app/storage/logs/laravel.log &
But you should also use something like Supervisord to ensure that the service remains running and is restarted after crashes/failures.
Running queue:listen with supervisord
supervisord is a *nix utility to monitor and control processes. Below is a portion of /etc/supervisord.conf that works well.
Portion of supervisord.conf for queue:listen
[program:l5beauty-queue-listen]
command=php /PATH/TO/l5beauty/artisan queue:listen
user=NONROOT-USER
process_name=%(program_name)s_%(process_num)d
directory=/PATH/TO/l5beauty
stdout_logfile=/PATH/TO/l5beauty/storage/logs/supervisord.log
redirect_stderr=true
numprocs=1
You’ll need to replace the /PATH/TO/ to match your local install. Likewise, the user setting will be unique to your installation.
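After adding that block, supervisor has to pick up the new program; the usual supervisorctl sequence for that is:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start l5beauty-queue-listen:*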

Running Laravel queues automatically [duplicate]

This question already has answers here:
How to keep Laravel Queue system running on server
(20 answers)
Closed 4 years ago.
I have implemented a Laravel queue. The thing is, I have to run the command php artisan queue:listen every time. Is there any way for the jobs to get executed automatically without running any command?
Here's a one-liner to put into your crontab (let it run, say, every 5 minutes):
cd /path/to/your/project && jobs -l | grep `cat queue.pid` || { nohup /usr/bin/php artisan queue:listen & echo $! > queue.pid; }
Two variables here:
1. /path/to/your/project -- your Laravel project root; effectively, the folder where php artisan would work;
2. /usr/bin/php -- the path to the PHP executable on the server (which php).
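Put together as an actual crontab entry with the 5-minute schedule mentioned above (the one-liner itself is verbatim from the answer, placeholders included):

*/5 * * * * cd /path/to/your/project && jobs -l | grep `cat queue.pid` || { nohup /usr/bin/php artisan queue:listen & echo $! > queue.pid; }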
Yes, if you use Linux you can use, for example, supervisor, which will run php artisan queue:listen (you need to add this command to the supervisor configuration file) and will make sure this command is running all the time.
