I want to deploy a simple app to my EC2 instance, but I got this error:
bash: line 0: cd: /home/ubuntu/source: No such file or directory
fetch failed
Deploy failed
1
I don't understand why there is a 'source' directory when I haven't created it on my virtual or local machine. It's like pm2 created it on its own. Can someone explain why it is there and how I can deploy successfully?
My ecosystem.config.js:
module.exports = {
  apps: [{
    name: 'puk',
    script: 'project/'
  }],
  deploy: {
    production: {
      user: 'ubuntu',
      host: 'ec2-35-180-119-129.eu-west-3.compute.amazonaws.com',
      key: '~/.ssh/id_rsa.pub',
      ref: 'origin/master',
      repo: 'git@github.com:nalnir/pukinn.git',
      path: '/home/ubuntu/',
      'post-deploy': 'npm install && pm2 startOrRestart ecosystem.config.js'
    }
  }
}
Full log after running the pm2 deploy production command:
--> Deploying to production environment
--> on host ec2-35-180-119-129.eu-west-3.compute.amazonaws.com
○ deploying origin/master
○ executing pre-deploy-local
○ hook pre-deploy
○ fetching updates
○ full fetch
bash: line 0: cd: /home/ubuntu/source: No such file or directory
fetch failed
Deploy failed
1
I faced the same issue and found this thread, but the answers/comments above were not very helpful for me, and there is no helpful documentation on the PM2 website either. So I went through all the steps one by one from the beginning:
Run the initial setup before calling the update command on any existing folder, because PM2 creates its own folder structure on the server: [current, source, shared] (see the check sketched after the command below).
pm2 deploy ecosystem.config.js stage setup
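For reference, a quick way to verify that the setup step worked (the layout comes from the PM2 deploy docs; the host and path below are taken from the question's config, so adjust them to yours):

# hypothetical check over SSH; adjust user, host and path to your setup
ssh ubuntu@ec2-35-180-119-129.eu-west-3.compute.amazonaws.com 'ls /home/ubuntu'
# expected folders after a successful setup:
#   current -> symlink to the release currently live
#   shared  -> logs and pids shared between releases
#   source  -> the git clone that the update command pulls into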
When you want to deploy new code, run the command below:
pm2 deploy ecosystem.config.js stage update --force
Why --force?
You may have some changes in your local system that aren't pushed to your git repository, and since the deploy script gets updates via git pull, they will not be on your server. If you want to deploy without pushing any data, you can append the --force option.
My deploy object in the ecosystem.config.js file:
deploy: {
  stage: {
    // First setup: pm2 deploy ecosystem.config.js stage setup
    // Update: pm2 deploy ecosystem.config.js stage update --force
    user: '_MY_SERVER_USER_NAME_',  // remote server username
    host: '_MY_REMOTE_SERVER_IP_',  // remote server IP
    ref: 'origin/stage',            // git branch to deploy
    repo: 'git@bitbucket.org:_MY_REPO_SSH_CLONE_URL_.git', // repo URL
    path: '_REMOTE_DIRECTORY_',     // deploy root path, e.g. /home/ubuntu/
    'pre-deploy-local': '',
    'post-deploy': 'npm install && pm2 reload ecosystem.config.js --only MyAppName',
    'pre-setup': ''
  }
}
I hope it will be helpful for others.
The script parameter expects the actual script path, not a directory.
You should change it to the name of your main script, for example: script: './index.js'
You should also update your deploy.production.path to something like /home/ubuntu/project
As stated in the Ecosystem file reference, script expects the path of the script to launch.
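Putting both suggestions together, a sketch of the corrected ecosystem.config.js (./index.js and /home/ubuntu/project follow this answer's examples; substitute your actual entry point and deploy path):

module.exports = {
  apps: [{
    name: 'puk',
    script: './index.js' // path to a script file, not a directory
  }],
  deploy: {
    production: {
      user: 'ubuntu',
      host: 'ec2-35-180-119-129.eu-west-3.compute.amazonaws.com',
      key: '~/.ssh/id_rsa.pub',
      ref: 'origin/master',
      repo: 'git@github.com:nalnir/pukinn.git',
      path: '/home/ubuntu/project', // PM2 creates source/, current/ and shared/ under this path
      'post-deploy': 'npm install && pm2 startOrRestart ecosystem.config.js'
    }
  }
}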
I am building a generated app with JHipster.
I ran the command to build the images and run the app containerized. I started Docker Desktop on Windows 11.
As a reminder, this is the command: ./gradlew -Pprod bootJar jib
The output after a while is:
Execution failed for task ':jib'.
> com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: Build image failed, perhaps you should make sure your credentials for 'registry-1.docker.io/library/app2' are set up correctly. See https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#what-should-i-do-when-the-registry-responds-with-unauthorized for help
I tried multiple times to log in to Docker:
docker login registry-1.docker.io
The login is successful, and the content of Docker's config.json is:
{
  "auths": {
    "https://index.docker.io/v1/": {},
    "registry-1.docker.io": {}
  },
  "credsStore": "desktop"
}
I'm sure that this is where Jib looks for Docker credentials by default, but I cannot see any credentials here. It looks like the credentials are stored somewhere else. Here is the Docker version: Docker version 20.10.17, build 100c701
Maybe try building offline first; it is probably a permissions issue on the remote repository, which will need to be fixed on hub.docker.com.
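One thing worth checking (an assumption based on the image name in the error, not something confirmed in this thread): 'library/app2' is Docker Hub's official-images namespace, which ordinary accounts cannot push to. Jib lets you override the target image and credentials from the command line, so something like this, with myusername standing in for your Docker Hub username, may get past the unauthorized error:

# hypothetical: push to your own Docker Hub namespace instead of library/
./gradlew -Pprod bootJar jib -Djib.to.image=docker.io/myusername/app2

# if Jib still cannot find credentials, pass them explicitly
./gradlew -Pprod bootJar jib \
  -Djib.to.image=docker.io/myusername/app2 \
  -Djib.to.auth.username=myusername \
  -Djib.to.auth.password=mypassword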
I am using the GitLab tutorial https://docs.gitlab.com/ee/ci/examples/laravel_with_gitlab_and_envoy/ for deploying a Laravel application to my DigitalOcean server.
But when it runs task two, I am getting the following errors.
$ ~/.composer/vendor/bin/envoy run deploy --commit="$CI_COMMIT_SHA"
/bin/bash: line 103: /root/.composer/vendor/bin/envoy: No such file or directory
ERROR: Job failed: exit code 1
Try installing Envoy globally in your before_script, in your Composer home directory:
before_script:
  - export COMPOSER_HOME=`pwd`/composer && mkdir -pv $COMPOSER_HOME
  - composer global require --prefer-dist laravel/envoy=~1.0 --no-interaction --quiet
After this, you can call Envoy in your deploy script like this:
- ${COMPOSER_HOME}/vendor/laravel/envoy/envoy run deploy --commit="$CI_COMMIT_SHA"
Thank you for the answer. I had to use .../envoy/bin/envoy in the script, which is what worked for me.
Complete command: - ${COMPOSER_HOME}/vendor/laravel/envoy/bin/envoy run deploy --commit="$CI_COMMIT_SHA"
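Putting the pieces together, a minimal sketch of the relevant .gitlab-ci.yml job (the job name and stage are placeholders; the commands come from the answer and comment above):

deploy:
  stage: deploy
  before_script:
    - export COMPOSER_HOME=`pwd`/composer && mkdir -pv $COMPOSER_HOME
    - composer global require --prefer-dist laravel/envoy=~1.0 --no-interaction --quiet
  script:
    - ${COMPOSER_HOME}/vendor/laravel/envoy/bin/envoy run deploy --commit="$CI_COMMIT_SHA"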
I'm trying to use Docker with Jenkins and am not sure if I am on the right track.
Given:
Running Jenkins on Docker from Windows
Plan on fetching code from GitHub, building the solution, running functional tests, etc. in a container somehow
What I've currently done:
(1) Installed Docker on Windows
(2) Successfully launched Jenkins on Docker with the command
"docker run –name myJenkins -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ jenkins/jenkins:lts"
I believe this step binds the docker volume to my host machine's directory. This allows me to view and access the Jenkins content.
(3) In my host machine's Jenkins directory, I've created a plugins.txt file (containing a variety of Jenkins plugins I want installed) and a Dockerfile. The Dockerfile installs the plugins specified in the plugins.txt file.
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
(4) In the Windows command prompt, I built the image with the command "docker build -t new_jenkins_image ."
(5) I stop my current container "myJenkins" and create a new container with the command "docker run --name myJenkins2 -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ new_jenkins_image". This loads up Jenkins with the newly installed plugins.
What I'm stuck/confused on
(1) Do I have to create a new container with a new name every time I want to install new Jenkins plugins through the Dockerfile? This seems like a manual process as well... There has to be a better way.
(2) I started a basic Jenkins pipeline job with the "Pipeline script from SCM" option. I entered the correct repository URL and credentials but left the "Script Path" blank for now (I do not have a Jenkinsfile yet). When I execute the build, Jenkins did not fetch the code from GitHub.
java.lang.IllegalArgumentException: Empty path not permitted.
at org.eclipse.jgit.treewalk.filter.PathFilter.create(PathFilter.java:80)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:205)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:249)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:281)
at jenkins.plugins.git.GitSCMFile$3.invoke(GitSCMFile.java:171)
at jenkins.plugins.git.GitSCMFile$3.invoke(GitSCMFile.java:165)
at jenkins.plugins.git.GitSCMFileSystem$3.invoke(GitSCMFileSystem.java:193)
at org.jenkinsci.plugins.gitclient.AbstractGitAPIImpl.withRepository(AbstractGitAPIImpl.java:29)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.withRepository(CliGitAPIImpl.java:72)
at jenkins.plugins.git.GitSCMFileSystem.invoke(GitSCMFileSystem.java:189)
at jenkins.plugins.git.GitSCMFile.content(GitSCMFile.java:165)
at jenkins.scm.api.SCMFile.contentAsString(SCMFile.java:338)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:110)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:67)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:293)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
I believe it's because the Docker container does not have Git installed? The container cannot access Git or MSBuild from my host machine... Do I have to create a new container here simply to fetch the code?
Can someone explain to me what I'm missing or where I went wrong?
From my understanding, the process goes like this: Create new pipeline job -> select pipeline script from scm -> enter repo URL, credentials, branch to build and Jenkinsfile -> Jenkinsfile will execute instructions to compile, test, and deploy.
Where does the Dockerfile come into play here? Is my thought process on the right track?
You need to create a new container every time you change/update the image, but it is not required to give it a new name each time. Did you stop and remove the previously running container? If not, Docker gives errors like "a container with the same name cannot be started". So stop and remove your previous container, and you will be able to start a new container with the updated image.
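For example, reusing the container name and image from the question:

# remove the old container, then start a new one from the rebuilt image
docker stop myJenkins
docker rm myJenkins
docker run --name myJenkins -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ new_jenkins_image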
Yes, you need Git installed in the same container to pull the code; it cannot access Git on the host machine. But the error you are showing looks like a validation error (I mean that Jenkins validates the input even before trying to pull the code; if you enter some fake path, it will throw the next error instead, like "git not found").
Your thought process is on the right track: create a new pipeline job -> select pipeline script from SCM -> enter the repo URL, credentials, branch to build, and Jenkinsfile -> the Jenkinsfile will execute instructions to compile, test, and deploy.
At the end of the question you mentioned a different Dockerfile; I assume you are talking about the Dockerfile in your repository (git). You can run your pipeline in a Docker agent. This removes the need to set up everything on the Jenkins host, meaning you don't need to install dependencies on the host to run your pipeline code. For example, if you are trying to execute some Node.js code in the pipeline, you would normally need to set up Node.js on the Jenkins host first; to get rid of this, you can run the pipeline in a container where everything is pre-installed, as sketched below. But I don't think you can use this feature if you are running Jenkins itself in Docker; you would need to set up Jenkins directly on the host in that case.
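For illustration, a minimal declarative Jenkinsfile that runs its stages in a Docker agent (the image and shell commands are placeholders, not taken from the question):

// runs every stage inside a node:lts container instead of on the Jenkins host
pipeline {
    agent {
        docker { image 'node:lts' }
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
    }
}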
Problem: I am having difficulty deploying the Jekyll build folder to an FTP server via Wercker.
I've been using Wercker for continuous integration of a Jekyll site I'm working on. Using the script below, the build steps (jekyll build and jekyll doctor) appear to be working as intended.
My deploy step should upload the "_site" folder to my FTP server. I'm currently using duleorlovic's ftp-deploy Wercker step. It's currently uploading the entire directory instead of just the build folder.
However, Jekyll uses the /_site folder as the directory the site gets built into... how can I limit my upload to just the /_site build folder?
Thanks.
Current wercker.yml as follows:
# Wercker Configuration
# continuous delivery platform that helps software developers
# build and deploy their applications and microservices
box: ruby
build:
  steps:
    # Install dependencies
    - bundle-install
    # Execute the jekyll doctor command to validate the
    # site against a list of known issues.
    - script:
        name: jekyll doctor
        code: bundle exec jekyll doctor
    - script:
        name: jekyll build
        code: bundle exec jekyll build --trace
deploy:
  steps:
    - duleorlovic/ftp-deploy:
        destination: $FTP_SERVER
        username: $FTP_USERNAME
        password: $FTP_PASSWORD
        timeout: 15
        remote-file: remote.txt
Solved it.
Apparently Wercker offers an environment variable called $WERCKER_OUTPUT_DIR. This directory is the folder that gets piped to the deploy step when the build step passes. If nothing is passed to it, the deploy step just uses the root directory (i.e., not your build folder).
The working wercker.yml contains the jekyll build step as follows:
- script:
    name: jekyll build
    code: bundle exec jekyll build --trace --destination "$WERCKER_OUTPUT_DIR"
I wasn't able to find much in the Wercker docs on the matter, since it seems like they're in transition between versions, but I found the solution in an example of how to use Wercker.
You can see the example of using the Output Directory in their guide How Wercker Works.
By passing the cwd: argument to the step, you can change its working directory (according to the Wercker docs):
deploy:
  steps:
    - duleorlovic/ftp-deploy:
        cwd: _site/
        destination: $FTP_SERVER
        username: $FTP_USERNAME
        password: $FTP_PASSWORD
        timeout: 15
        remote-file: remote.txt
I have deployed Parse Server on Heroku (https://github.com/ParsePlatform/parse-server) but can't find anything on deploying the Parse Dashboard on Heroku. Any reference is appreciated!
You shouldn't have to clone the parse-dashboard repository. Here is a better way using parse-dashboard as a node module.
Create a new node app:
mkdir my-parse-dashboard
cd my-parse-dashboard
npm init
Fill out the details it asks for.
Create a git repository:
git init
Additionally you can push this git repository to a remote server (e.g. Bitbucket). Note this repository should be private since it will contain your master key.
Install the parse-dashboard package:
npm install parse-dashboard --save
Create an index.js file with the following line:
require('parse-dashboard/Parse-Dashboard/index.js');
Create a parse-dashboard-config.json file which looks like this:
{
  "apps": [
    {
      "serverURL": "your parse server url",
      "appId": "your app Id",
      "masterKey": "your master key",
      "appName": "My Parse App"
    }
  ],
  "users": [
    {
      "user": "username",
      "pass": "password"
    }
  ]
}
Update your package.json file and add this section (or modify it if it already exists):
"scripts": {
"start": "node ./index.js --config ./parse-dashboard-config.json --allowInsecureHTTP=1"
}
Note: The allowInsecureHTTP flag seems to be required on Heroku. Thanks to @nsarafa for this.
Commit all your changes and merge them into master.
Create a new Heroku app: heroku apps:create my-parse-dashboard
Run git push heroku master to deploy your app to Heroku.
Remember to generate a strong password, as your dashboard is accessible to anyone on the internet. And make the dashboard accessible only through SSL, or else your password will be sent in clear text. Read this tutorial on how to force all traffic over SSL on Heroku with Cloudflare for your domain.
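Optionally, before pushing, you can sanity-check the dashboard locally (an extra step not in the original answer; parse-dashboard listens on port 4040 by default when no PORT is set):

npm start
# then open http://localhost:4040 and log in with a user from parse-dashboard-config.json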
I just managed to get this working. Here are the steps I took.
Clone parse-dashboard to your local machine.
Run npm install inside that directory.
Update package.json and change the "start" script to:
"start": "node ./Parse-Dashboard/index.js --config ./Parse-Dashboard /parse-dashboard-config.json --allowInsecureHTTP=1"
(Thanks to nsarafa's answer above for that).
Edit your .gitignore file and remove the following three lines:
bundles/
Parse-Dashboard/public/bundles/
Parse-Dashboard/parse-dashboard-config.json
Edit your config file in Parse-Dashboard/parse-dashboard-config.json, making sure the URLs and keys are correct. Here is an example:
{
  "apps": [
    {
      "serverURL": "https://dhowung-fjird-52012.herokuapp.com/parse",
      "appId": "myAppId",
      "masterKey": "myMasterKey",
      "appName": "dhowung-fjird-40722"
    }
  ],
  "users": [
    {
      "user": "myUserName",
      "pass": "Str0ng_?Passw0rd"
    }
  ]
}
Remove the cache from your Heroku parse server app:
heroku config:set NODE_MODULES_CACHE=false --app yourHerokuParseServerApp
If we follow the example above, yourHerokuParseServerApp = dhowung-fjird-40722.
(Again, thanks to nsarafa).
Add, commit and push your changes.
Deploy to Heroku again using their CLI or the dashboard.
Step 4 was the key for me because I wasn't committing my config file, and it took me a while to realise.
Also, as stated above, make sure you have user logins and passwords in your config file, following the parse-dashboard docs.
PS: on your Heroku parse server, make sure your SERVER_URL looks like this: https://yourHerokuParseServerAppName.herokuapp.com/parse
Update brew: brew update
Install the Heroku CLI: brew install heroku-toolbelt
Log in via the command line with your Heroku credentials: heroku login
Make sure your app is there: heroku list, and note YOURHEROKUAPPSNAME containing the parse-dashboard deployment
Tell Heroku to ignore the cache from previous deploys: heroku config:set NODE_MODULES_CACHE=false --app YOURHEROKUAPPSNAME
Go to your package.json and change "start": "node ./Parse-Dashboard/index.js" to "start": "node ./Parse-Dashboard/index.js --config ./Parse-Dashboard/parse-dashboard-config.json --allowInsecureHTTP=1"
Delete your Procfile: rm Procfile
Add, commit and merge to your master branch
Run git push heroku master
On Heroku, a Procfile takes precedence over the start script inside your package.json, which is why the Procfile has to be deleted; with it gone, Heroku falls back to npm start. This process should enable a clean deploy to Heroku. Please be cautious and generate user logins with strong passwords before performing this deployment, per the parse-dashboard documentation.
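For reference, a minimal sketch of the resulting scripts section in package.json after step 6:

"scripts": {
  "start": "node ./Parse-Dashboard/index.js --config ./Parse-Dashboard/parse-dashboard-config.json --allowInsecureHTTP=1"
}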