I am following the GitLab tutorial https://docs.gitlab.com/ee/ci/examples/laravel_with_gitlab_and_envoy/ to deploy a Laravel application to my DigitalOcean server.
But when it runs task two, I get the following errors:
$ ~/.composer/vendor/bin/envoy run deploy --commit="$CI_COMMIT_SHA"
/bin/bash: line 103: /root/.composer/vendor/bin/envoy: No such file or directory
ERROR: Job failed: exit code 1
Try installing Envoy globally in your Composer home directory in your before_script:
before_script:
- export COMPOSER_HOME=`pwd`/composer && mkdir -pv $COMPOSER_HOME
- composer global require laravel/envoy=~1.0 --no-interaction --prefer-dist --quiet
After this you can call envoy in your deploy script like this:
- ${COMPOSER_HOME}/vendor/laravel/envoy/envoy run deploy --commit="$CI_COMMIT_SHA"
Thank you for the answer. I had to add .../envoy/bin/envoy in the script, which is what worked for me.
Complete command:
- ${COMPOSER_HOME}/vendor/laravel/envoy/bin/envoy run deploy --commit="$CI_COMMIT_SHA"
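Putting it together, here is a minimal sketch of how the deploy job might look in .gitlab-ci.yml with that fix applied (the job name, stage, and only: branch are assumptions, not taken from the tutorial):

deploy:
  stage: deploy
  # assumes the runner image already has PHP and Composer available
  before_script:
    # install Envoy into a Composer home inside the build directory
    - export COMPOSER_HOME=`pwd`/composer && mkdir -pv $COMPOSER_HOME
    - composer global require laravel/envoy=~1.0 --no-interaction --prefer-dist --quiet
  script:
    # note the extra bin/ segment in the path, per the follow-up above
    - ${COMPOSER_HOME}/vendor/laravel/envoy/bin/envoy run deploy --commit="$CI_COMMIT_SHA"
  only:
    - master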
The problem:
Whenever I try to make a request to a Laravel API hosted with Laravel Vapor, I get a 502.
When I view the error logs in CloudWatch, the 502 appears to be returned because there is either a permissions problem with the bootstrap/cache folder, or the folder is missing for some reason after deployment.
What I did to try to fix this:
I ensured the folder permissions were correct (the cache folder has 755, and the .gitignore 644).
I ensured there was a .gitignore ignoring the entire contents of the cache folder.
I ran composer dump-autoload and composer update.
I tried deploying from the command line and from GitHub Actions.
None of the above yielded any results, and the support team tried their best but also could not identify the issue.
The solution??
So, one of my teammates noticed that we only started experiencing this issue after adding the vapor-ui package. I removed it completely from our project, redeployed, and the error was gone. The API responds as expected now.
My question is, why would installing vapor-ui cause this issue?
I know that the vapor-ui project itself is a Laravel project, so could it be that there is no bootstrap/cache folder in the vapor-ui project?
Here are snippets of my vapor.yml file before removing the vapor-ui:
id: 1234
name: notreal
environments:
  production:
    domain: notreal.notreal.com
    memory: 1024
    cli-memory: 512
    runtime: docker
    network: vapor-network-123
    build:
      - 'composer install --no-dev'
      - 'php artisan vapor-ui:install'
      - 'npm ci && npm run prod && rm -rf node_modules'
  staging:
    domain: sta-notreal.notreal.com
    memory: 1024
    cli-memory: 512
    runtime: docker
    network: vapor-network-1647335449
    database: notreal-sta
    build:
      - 'composer install'
      - 'php artisan vapor-ui:install'
      - 'npm ci && npm run dev && rm -rf node_modules'
  development:
    domain: dev-notreal.notreal.com
    memory: 1024
    cli-memory: 512
    runtime: docker
    network: vapor-network-123
    database: notreal-dev
    build:
      - 'composer install'
      - 'php artisan vapor-ui:install'
      - 'npm ci && npm run prod && rm -rf node_modules'
I don't think you need this build step:
php artisan vapor-ui:install
Run it once locally instead; everything is then already in place when you deploy.
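In other words, each environment's build list simply drops that line (production shown as a sketch; the same edit applies to staging and development):

build:
  - 'composer install --no-dev'
  # 'php artisan vapor-ui:install' removed; run it once locally and commit the result
  - 'npm ci && npm run prod && rm -rf node_modules'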
I'm using a Bitbucket pipeline to deploy my Laravel application. When I push to my repo it starts to build, and everything works perfectly until the docker exec command, which sends an inline command to execute inside the php container. I get the error
bash: line 3: docker: command not found
which is very weird, because when I run the command directly on the same server in the same directory it works perfectly. Docker is installed on the server, and as you can see inside execute.sh, docker-compose works with no issues; yet when running through the pipeline I get the error. Note the pwd, included to confirm the command executes in the right directory.
bitbucket-pipelines.yml
image: php:7.3

pipelines:
  branches:
    testing:
      - step:
          name: Deploy to Testing
          deployment: Testing
          services:
            - docker
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip openssh-client
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer require phpunit/phpunit
            - vendor/bin/phpunit laravel/tests/Unit
            - ssh do.server.net 'bash -s' < execute.sh
execute.sh looks like this:
cd /home/docker/docker-laravel
docker-compose build && docker-compose up -d
pwd
docker exec -ti php sh -c "php helpershell.php"
exit
And the output from the Bitbucket pipeline build looks like this:
Successfully built 1218483bd067
Successfully tagged docker-laravel_php:latest
Building nginx
Step 1/1 : FROM nginx:latest
---> 4733136e5c3c
Successfully built 4733136e5c3c
Successfully tagged docker-laravel_nginx:latest
Creating php ...
Creating mysql ...
Creating mysql ... done
Creating php ... done
Creating nginx ...
Creating nginx ... done
/home/docker/docker-laravel
bash: line 3: docker: command not found
I think part of the reason this is happening is that docker-compose and docker are two separate commands; just because one works does not mean they both do. You might also want to check the indentation of your bitbucket-pipelines.yml file, because YAML can be pretty finicky.
See here for sample structure: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html
Are you defining docker as a service in the Bitbucket pipeline, according to the documentation, with a top-level definitions entry? Like so:
definitions:
  services:
    docker:
      memory: 512  # reduce memory for docker-in-docker from 1GB to 512MB
Alternatively, if docker is included and ready to use directly in the image the pipeline runs, you might try removing the services key from your step, as it could be conflicting with the docker installed on the image (and since you haven't instantiated the docker service via the top-level definitions entry above, the pipeline may end up in a state where it thinks docker isn't set up).
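If you keep the service, a minimal sketch of how the two pieces might fit together in bitbucket-pipelines.yml (only the relevant keys shown; the host and script are taken from the question):

definitions:
  services:
    docker:
      memory: 512  # docker-in-docker service with reduced memory

image: php:7.3

pipelines:
  branches:
    testing:
      - step:
          name: Deploy to Testing
          deployment: Testing
          services:
            - docker  # references the definition above
          script:
            - ssh do.server.net 'bash -s' < execute.sh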
I want to deploy a simple app to my EC2 instance, but I get this error:
bash: line 0: cd: /home/ubuntu/source: No such file or directory
fetch failed
Deploy failed
1
I don't understand why there is a 'source' directory when I haven't created it on my virtual or local machine; it's as if pm2 created it on its own. Can someone explain why it's there and how I can deploy successfully?
My ecosystem.config.js:
module.exports = {
  apps: [{
    name: 'puk',
    script: 'project/'
  }],
  deploy: {
    production: {
      user: 'ubuntu',
      host: 'ec2-35-180-119-129.eu-west-3.compute.amazonaws.com',
      key: '~/.ssh/id_rsa.pub',
      ref: 'origin/master',
      repo: 'git@github.com:nalnir/pukinn.git',
      path: '/home/ubuntu/',
      'post-deploy': 'npm install && pm2 startOrRestart ecosystem.config.js'
    }
  }
}
Full log after running the pm2 deploy production command:
--> Deploying to production environment
--> on host ec2-35-180-119-129.eu-west-3.compute.amazonaws.com
○ deploying origin/master
○ executing pre-deploy-local
○ hook pre-deploy
○ fetching updates
○ full fetch
bash: line 0: cd: /home/ubuntu/source: No such file or directory
fetch failed
Deploy failed
1
I faced the same issue and found this thread, but the answers/comments above were not very helpful for me, and there is no helpful documentation on the PM2 website either. So I went through all the steps one by one from the beginning:
Run setup first, before calling the update command on any existing folder, because PM2 creates its own folder structure: [current, source, shared] (read here)
pm2 deploy ecosystem.config.js stage setup
When you want to deploy new code, use the command below:
pm2 deploy ecosystem.config.js stage update --force
Why --force?
You may have some changes in your local system that aren't pushed to your git repository, and since the deploy script gets the update via git pull, they will not be on your server. If you want to deploy without pushing any data, you can append the --force option.
My deploy object in the ecosystem.config.js file:
deploy: {
  stage: {
    // Deploy new: pm2 deploy ecosystem.config.js stage setup
    // Update:     pm2 deploy ecosystem.config.js stage update --force
    user: '_MY_SERVER_USER_NAME_',  // remote server username
    host: '_MY_REMOTE_SERVER_IP_',  // remote server IP
    ref: 'origin/stage',            // remote branch to deploy
    repo: 'git@bitbucket.org:_MY_REPO_SSH_CLONE_URL_.git',  // repo URL
    path: '_REMOTE_DIRECTORY_',     // root path on the server, e.g. /home/ubuntu/
    'pre-deploy-local': '',
    'post-deploy': 'npm install && pm2 reload ecosystem.config.js --only MyAppName',
    'pre-setup': ''
  }
}
I hope it will be helpful for others.
The script parameter expects the actual script path, not a directory.
You should change it to the name of your main script, for example: script: './index.js'
You should also update deploy.production.path to something like /home/ubuntu/project
As stated in the Ecosystem file reference, script expects the path of the script to launch.
I want to set up CircleCI. Here's my full config file:
version: 2
jobs:
  build:
    steps:
      - run: sudo apt-get install -y libsqlite3-dev
      - run: composer install -n --ignore-platform-reqs
      - run: npm install
      - run: npm run production
      - run: vendor/bin/phpunit
      - run:
          name: Start Chrome Driver
          command: ./vendor/laravel/dusk/bin/chromedriver-linux
          background: true
      - run:
          name: Run Laravel Server
          command: php artisan serve
          background: true
      - run:
          name: Run Laravel Dusk Tests
          command: php artisan dusk
However, I get this error message:
build-agent version 0.0.4142-1bd195a (2017-09-11T09:57:00+0000)
Configuration errors: 2 errors occurred:
Error parsing config file: yaml: line 1: mapping values are not allowed in this context
Cannot find a job named build to run in the jobs: section of your configuration file. This can happen if you have no workflows
defined, or a typo in your 'workflows:' config.
I use an AWS RDS database for my production environment. Is it possible to use a different (i.e. local) database during testing, seed it with fake data, and purge the database after the tests? This must not affect my production database. The issue above also needs to be fixed, but I don't know what's wrong here.
This one might be helpful: config.yml
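Beyond that link, here is a minimal sketch of what a working config might look like; treat it as an assumption, not a verified fix. It illustrates two things: a CircleCI 2 job needs an executor (the missing docker: key is one way to end up with the "Cannot find a job named build" error), and Laravel can be pointed at an in-memory SQLite database through environment variables so the tests never touch the production RDS instance. The image tag is an assumption, and the Dusk steps are omitted for brevity:

version: 2
jobs:
  build:
    docker:
      - image: circleci/php:7.2-node-browsers  # assumption: any PHP+Node image works
    environment:
      DB_CONNECTION: sqlite    # run tests against local SQLite, not RDS
      DB_DATABASE: ':memory:'  # discarded automatically when the job ends
    steps:
      - checkout
      - run: sudo apt-get update && sudo apt-get install -y libsqlite3-dev
      - run: composer install -n --ignore-platform-reqs
      - run: npm install
      - run: npm run production
      - run: vendor/bin/phpunit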
I am trying to deploy a Laravel PHP project with Laravel Forge. I have connected it to my repository on GitHub correctly. However, when I hit deploy and then go to the public IP for the site, I just see:
"No input file specified."
on the page.
I do not know why it is exhibiting this behavior.
If I go to the latest deployment log, I see:
/home/forge/.forge/provision-433327.sh: line 1: cd: /home/forge/default/laravel: No such file or directory
fatal: Not a git repository (or any of the parent directories): .git
Composer could not find a composer.json file in /home/forge
To initialize a project, please create a composer.json file as described in the http://getcomposer.org/ "Getting Started" section
Could not open input file: artisan
However, I do have a composer.json file in my laravel folder....
Any ideas? Thank you in advance.
It looks like the /home/forge/default/laravel directory doesn't exist. Could you ssh in and verify that the directory exists?
These are the commands that the default forge deploy script runs:
cd /home/forge/default
git pull origin master
composer install
php artisan migrate --force
It looks like your deploy script is navigating to /home/forge/default/laravel instead.
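A sketch of how you might verify and fix this over SSH (the paths are assumptions based on the log above):

# check what was actually cloned to the server
ls /home/forge/default
# if composer.json sits directly in /home/forge/default, the deploy script should use:
cd /home/forge/default
git pull origin master
composer install
php artisan migrate --force
# if it really sits in /home/forge/default/laravel, make sure that folder exists and cd into it instead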