The pm2 stop command is not working when run by id from a Jenkins pipeline (shell script):
pm2 stop 8
When I run the same command directly on the machine, it works fine; running it in the Jenkins pipeline fails with an error. Could you suggest a solution? The id is correct, and I have also tried the app name, which does not work either:
pm2 stop Testing Webhook
Error details:
Building on master in workspace /Jenkins/workspace/Test-Job
[Test-Job] $ /bin/sh -xe /tmp/jenkins206901532071257719.sh
+ cd /usr/lib/Webhook
+ pm2 stop 8
[PM2] Spawning PM2 daemon with pm2_home=/var/lib/jenkins/.pm2
[PM2] PM2 Successfully daemonized
[PM2] Applying action stopProcessId on app [8](ids: 8)
[PM2][ERROR] Process 8 not found
┌──────────┬────┬──────┬─────┬────────┬─────────┬────────┬─────┬─────┬──────────┐
│ App name │ id │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ watching │
└──────────┴────┴──────┴─────┴────────┴─────────┴────────┴─────┴─────┴──────────┘
Use `pm2 show <id|name>` to get more details about an app
Build step 'Execute shell' marked build as failure
[BFA] Scanning build for known causes...
[BFA] No failure causes found
[BFA] Done. 0s
Finished: FAILURE
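A hint sits in the log itself: pm2 reports "Spawning PM2 daemon with pm2_home=/var/lib/jenkins/.pm2", meaning the Jenkins user starts a fresh, empty PM2 daemon instead of talking to the daemon that owns process 8 (hence the empty process table). A minimal sketch of a possible fix for the shell step, assuming the apps were started by another user whose home directory is /home/appuser (a hypothetical path; adjust it to your setup):

export PM2_HOME=/home/appuser/.pm2  # hypothetical: the .pm2 dir of the user that started the apps
cd /usr/lib/Webhook
pm2 stop 8

Running the build step as that user (e.g. via sudo -u) would have the same effect.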
First I set up one runner for one job, deploying my backend for the API. I'm using "shell" as the executor. The .toml file has this structure:
concurrent = 1
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "Gitlab Runner Josere Backend"
url = "https://gitlab.com/"
token = "sOmEtOkeN1G0Tfr0mGitlab"
executor = "shell"
[runners.custom_build_dir]
[runners.cache]
[some mumbo jumbo about caching.. does it matter?]
With some struggle I got that to work fine with this .gitlab-ci.yml:
deploy-production:
stage: deploy
variables:
GIT_STRATEGY: clone
script:
- cd ./lumen/
- composer install
- sudo cp -r $CI_PROJECT_DIR/lumen/. /home/josere/public_html/api/
- sudo cp /home/josere/env/.env /home/josere/public_html/api
This is the execution output of the runner:
Running with gitlab-runner 15.2.1 (32fc1585)
on Gitlab Runner Josere backend 9JxGrMLz
Preparing the "shell" executor
00:00
Using Shell executor...
Preparing environment
00:00
Running on ####[my server]#####...
Getting source from Git repository
00:03
Fetching changes with git depth set to 50...
Initialized empty Git repository in /home/gitlab-runner/builds/9JxGrMLz/0/paspalas/josere/.git/
Created fresh repository.
... etc ...
In my frontend repo in GitLab I went to the same runner settings. I can't really install another runner (it's already running, I guess), but I can copy the token that is shown there.
Then I changed my .toml file according to this doc from GitLab (https://docs.gitlab.com/runner/fleet_scaling/):
concurrent = 2
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "Gitlab Runner Josere Backend"
url = "https://gitlab.com/"
token = "sOmEtOkeN1G0Tfr0mGitlab"
executor = "shell"
[runners.custom_build_dir]
[runners.cache]
[some mumbo jumbo about caching.. does it matter?]
[[runners]]
name = "Gitlab Runner Josere Frontend"
url = "https://gitlab.com/"
token = "TheOtherTokenThatIgotFromFrontendRepo!"
executor = "shell"
[runners.custom_build_dir]
[runners.cache]
[some mumbo jumbo about caching.. does it matter?]
Notice I keep the executor set to "shell".
This is the .gitlab-ci.yml that goes in the root of the frontend repo:
deploy-production:
stage: deploy
variables:
GIT_STRATEGY: clone
script:
- npm install
- npm run build
- sudo cp -r $CI_PROJECT_DIR/public/. /home/josere/public_html/
But when I commit to my frontend repo and check the (failing) job log, it shows this:
Running with gitlab-runner 15.4.0~beta.5.gdefc7017 (defc7017)
on green-1.shared.runners-manager.gitlab.com/default JLgUopmM
Preparing the "docker+machine" executor
00:06
Using Docker executor with image ruby:2.5 ...
Pulling docker image ruby:2.5 ...
Using docker image sha256:27d###mumbojumbo###2383b for ruby:2.5 with digest ruby@sha256:ecc3###mumbojumbo###444b ...
Preparing environment
00:00
Running on runner-jlguopmm-project-39467125-concurrent-0 via runner-jlguopmm-shared-1665674167-6adf45bf...
Getting source from Git repository
00:02
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 20...
Initialized empty Git repository in /builds/paspalas/josere-frontend/.git/
Created fresh repository.
Checking out c39e641c as materialui...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:01
Using docker image sha256:27d###mumbojumbo###3b for ruby:2.5 with digest ruby@sha256:ecc3e###mumbojumbo####44b ...
$ sudo npm install
/bin/bash: line 126: sudo: command not found
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
Clearly multiple things are going wrong. To start with: why is it using Docker when I explicitly tell it to use "shell"?
I fixed the issue. Even though the GitLab docs differentiate between a "runner" and a "job", gitlab-runner calls these "registrations" of a runner. I did the (extra) registration like so:
- gitlab-runner register
[fill in the info]
- nano /etc/gitlab-runner/config.toml
[check that the additional runner is there]
- gitlab-runner run
[according to gitlab-runner help, this fires up multiple runners]
- gitlab-runner list
[now you can check that all "runners" (jobs) are running]
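For reference, a non-interactive version of that extra registration; the flags below exist in gitlab-runner 15.x, and the description is only illustrative:

sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "TheOtherTokenThatIgotFromFrontendRepo!" \
  --executor "shell" \
  --description "Gitlab Runner Josere Frontend"
sudo gitlab-runner list  # both registrations should now show up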
I have GitLab Runner installed on macOS via Homebrew. The runner configuration is located under ${HOME}/.gitlab-runner/config.toml, and the service configuration under ${HOME}/Library/LaunchAgents/homebrew.mxcl.gitlab-runner.plist is the default.
Below is my gitlab-runner toml configuration file.
concurrent = 1
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "MY_RUNNER_NAME"
url = "https://gitlab.com/"
token = "MY_GITLAB_TOKEN"
executor = "shell"
shell = "bash"
environment = ["PATH=/usr/local/opt/openjdk@8/bin:/usr/local/opt/ruby@2.7/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin", "LC_ALL=en_US.UTF-8", "LANG=en_US.UTF-8"]
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
The runner connects to GitLab.com correctly and executes the steps, but it gets stuck at the "Uploading artifacts" step until the build times out.
Below are my "Uploading artifacts" logs:
Uploading artifacts...
Runtime platform arch=amd64 os=darwin pid=16690 revision=775dd39d version=13.8.0
android/app/build/outputs/bundle/release/app-release.aab: found 1 matching files and directories
ERROR: Job failed: execution took longer than 1h0m0s seconds
As a debugging step, I tried to run gitlab-runner artifacts-uploader locally to trace the behavior, using this command:
gitlab-runner --debug --log-level debug artifacts-uploader --verbose --id MY_BUILD_ID --token MY_GITLAB_TOKEN --url https://gitlab.com/ --path android/app/build/outputs/bundle/release/app-release.aab --expire-in "1 week"
Below are the gitlab-runner artifacts-uploader logs:
Runtime platform arch=amd64 os=darwin pid=25259 revision=775dd39d version=13.8.0
android/app/build/outputs/bundle/release/app-release.aab: found 1 matching files and directories
Dialing: tcp gitlab.com:443 ...
It is obvious that the gitlab-runner artifacts-uploader got stuck connecting to gitlab.com:443, and now I am out of ideas on how to trace or solve this issue.
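Not part of the original post, but one way to narrow this down, assuming the hang is at the network level: check whether a TLS connection to gitlab.com:443 can be established from the same machine at all.

openssl s_client -connect gitlab.com:443 -servername gitlab.com </dev/null
curl -v --max-time 30 https://gitlab.com/ -o /dev/null

If these also hang, the culprit is likely a local firewall or proxy rather than gitlab-runner itself.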
I am relatively new to deploying Python web applications. I was trying to deploy my H2O Wave app to Heroku but kept running into issues, and I couldn't find much help in the documentation.
Everything works fine locally if I start the server using this command (in the Wave SDK):
$ ./waved
2021/01/22 10:26:38 #
2021/01/22 10:26:38 # ┌─────────────────────────┐
2021/01/22 10:26:38 # │ ┌ ┌ ┌──┐ ┌ ┌ ┌──┐ │ H2O Wave
2021/01/22 10:26:38 # │ │ ┌──┘ │──│ │ │ └┐ │ 0.11.0 20210118061246
2021/01/22 10:26:38 # │ └─┘ ┘ ┘ └──┘ └─┘ │ © 2020 H2O.ai, Inc.
2021/01/22 10:26:38 # └─────────────────────────┘
2021/01/22 10:26:38 #
2021/01/22 10:26:38 # {"address":":10101","t":"listen","webroot":"/Users/kenjohnson/Documents/TTT/H2O Wave/wave/www"}
2021/01/22 10:26:39 # {"addr":"127.0.0.1:64065","route":"/tennis-pred","t":"ui_add"}
2021/01/22 10:46:04 # {"host":"http://127.0.0.1:8000","route":"/counter","t":"app_add"}
and then, in the root directory of the project, running:
uvicorn tennis_pred_app:main
For deployment, all I have other than my wave python file is a requirements.txt and a Procfile:
web: uvicorn tennis_pred_app:main --host 0.0.0.0 --port 10101
This is what my app (tennis_pred_app.py) looks like (simplified):
from h2o_wave import Q, main, app, ui
@app("/tennis-pred")
async def serve(q: Q):
show_form(q)
await q.page.save()
The error I am currently running into is:
2021-01-22T00:28:41.000000+00:00 app[api]: Build started by user x
2021-01-22T00:31:07.040695+00:00 heroku[web.1]: State changed from crashed to starting
2021-01-22T00:31:06.879674+00:00 app[api]: Deploy 1dc65130 by user x
2021-01-22T00:31:06.879674+00:00 app[api]: Release v23 created by user x
2021-01-22T00:31:26.580199+00:00 heroku[web.1]: Starting process with command `uvicorn tennis_pred_app:main --host 0.0.0.0 --port 20819`
2021-01-22T00:31:30.299421+00:00 app[web.1]: INFO: Uvicorn running on http://0.0.0.0:20819 (Press CTRL+C to quit)
2021-01-22T00:31:30.299892+00:00 app[web.1]: INFO: Started parent process [4]
2021-01-22T00:31:46.000000+00:00 app[api]: Build succeeded
2021-01-22T00:32:27.041954+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2021-01-22T00:32:27.093099+00:00 heroku[web.1]: Stopping process with SIGKILL
2021-01-22T00:32:27.199933+00:00 heroku[web.1]: Process exited with status 137
2021-01-22T00:32:27.242769+00:00 heroku[web.1]: State changed from starting to crashed
You don't get to choose your port on Heroku. Instead, Heroku assigns you a port via the PORT environment variable.
Change your Procfile from
web: uvicorn foo:main --host 0.0.0.0 --port 10101
to
web: uvicorn foo:main --host 0.0.0.0 --port $PORT
See this blog post for the exact guide.
More explanation of why the other answer is generally correct but does not apply to H2O Wave:
If you look at the architecture, you may notice there are actually two servers involved. The first is a Python (uvicorn) server that runs your Wave app; it is not exposed to the outside directly but sits behind a kind of proxy server, which is the second one. This second (Golang) server communicates directly with the browser (the outside), and so it is the one that should be started on the $PORT Heroku assigns you, e.g. via the H2O_WAVE_LISTEN environment variable; see the other config options.
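A minimal sketch of what that could look like in the Procfile, assuming the waved binary is shipped alongside the app and both processes share one dyno (an illustration, not an official recipe; the internal port 8000 matches the app_add line in the local log above):

web: H2O_WAVE_LISTEN=":$PORT" ./waved & uvicorn tennis_pred_app:main --host 127.0.0.1 --port 8000

Here waved binds to Heroku's public $PORT, while uvicorn stays on the internal port that waved fronts for the browser.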
I am trying to deploy a Laravel application with Deployer, but the process fails.
Below you can see the output:
✔ Executing task deploy:shared
✔ Executing task deploy:writable
➤ Executing task deploy:vendors
✔ Executing task deploy:failed
In Client.php line 99:
The command "cd /gopanel/sites/xxx_net/public/xaio/releases/1 && /usr/bin/php /usr/local/bin/composer install --verbose --prefer-dist
--no-progress --no-interaction --no-dev --optimize-autoloader" failed.
Exit Code: 1 (General error)
Host Name: xx.xxxx.net
================
Loading composer repositories with package information
Installing dependencies from lock file
Dependency resolution completed in 0.000 seconds
Analyzed 166 packages to resolve dependencies
Analyzed 463 rules to resolve dependencies
Package operations: 103 installs, 0 updates, 0 removals
Installs: symfony/polyfill-ctype:v1.11.0, phpoption/phpoption:1.5.0, vlucas/phpdotenv:v3.3.3, symfony/css-selector:v4.2.4,
y/psysh:v0.9.9, laravel/tinker:v1.0.8, intervention/image:2.4.2, league/glide:1.5.0, owen-it/laravel-auditing:v9.0.0, predis/predis:v1.1.1,
What is wrong?
The output is not enough to understand the problem. You should run your command in verbose mode by adding -vvv to the end of the command, like this:
user@local:~$ dep deploy host -vvv
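Not from the original answer, but since the failing command is printed verbatim in the error, another way to surface the real error is to run it directly on the host (paths taken from the error message above):

cd /gopanel/sites/xxx_net/public/xaio/releases/1
/usr/bin/php /usr/local/bin/composer install --verbose --prefer-dist --no-progress --no-interaction --no-dev --optimize-autoloader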
I am executing a Chef cookbook recipe in local mode, and I have placed the template .erb file under the cookbook's templates folder.
It is giving a Chef::Exceptions::CookbookNotFound error.
Attaching the execution log:
PS C:\chef-repo> chef-client -z -r "recipe[my_cookbook::test1]"
Starting Chef Client, version 12.18.31
resolving cookbooks for run list: ["my_cookbook::test1"]
Synchronizing Cookbooks:
- test (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
Converging 1 resources
Recipe: test::test1
* template[c:\test-template.txt] action create
================================================================================
Error executing action `create` on resource 'template[c:\test-template.txt]'
================================================================================
Chef::Exceptions::CookbookNotFound
----------------------------------
Cookbook test not found. If you're loading test from another cookbook, make sure you configure the dependency in your metadata
Resource Declaration:
---------------------
# In c:/chef-repo/.chef/local-mode-cache/cache/cookbooks/test/recipes/test1.rb
1: template "c:\\test-template.txt" do
2: source "test-template.txt.erb"
3: mode '0755'
4: variables({
5: test: node['cloud']['public_ipv4']
6: })
7: end
Compiled Resource:
------------------
# Declared in c:/chef-repo/.chef/local-mode-cache/cache/cookbooks/test/recipes/test1.rb:1:in `from_file'
template("c:\test-template.txt") do
action [:create]
retries 0
retry_delay 2
default_guard_interpreter :default
source "test-template.txt.erb"
variables {:test=>"1.1.1.1"}
declared_type :template
cookbook_name "test"
recipe_name "test1"
mode "0755"
path "c:\\test-template.txt"
end
Platform:
---------
x64-mingw32
Running handlers:
[2017-03-08T12:32:35+00:00] ERROR: Running exception handlers
Running handlers complete
[2017-03-08T12:32:35+00:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 05 seconds
[2017-03-08T12:32:35+00:00] FATAL: Stacktrace dumped to c:/chef-repo/.chef/local-mode-cache/cache/chef-stacktrace.out
[2017-03-08T12:32:35+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2017-03-08T12:32:35+00:00] FATAL: Chef::Exceptions::CookbookNotFound: template[c:\test-template.txt] (test::test1 line 1) had an error: Chef::Exceptions::CookbookNotFound: Cookbook test not found. If you're loading test from another cookbook, make sure you configure the dependency in your metadata
test1.rb
template "c:\\test-template.txt" do
source "test-template.txt.erb"
mode '0755'
variables({
test: node['cloud']['public_ipv4']
})
end
My chef-repo tree:
C:.
├───.chef
│ └───local-mode-cache
│ └───cache
│ └───cookbooks
│ └───test
│ ├───attributes
│ ├───recipes
│ └───templates
│                       └───test-template.txt.erb
├───cookbooks
│ └───my_cookbook
│ ├───attributes
│ ├───definitions
│ ├───files
│ │ └───default
│ ├───libraries
│ ├───providers
│ ├───recipes
│ ├───resources
│ └───templates
│ └───default
│               └───test-template.txt.erb
├───data_bags
│ └───example
├───environments
├───nodes
└───roles
Just a guess, but here's what I think is wrong:
The template resource looks for a source file in c:/chef-repo/.chef/local-mode-cache/cache/cookbooks/test/templates/test-template.txt.erb.
Given these log lines:
resolving cookbooks for run list: ["my_cookbook::test1"]
...
Converging 1 resources
Recipe: test::test1
This makes me think that either:
Your actual cookbook template is at "c:/chef-repo/.chef/local-mode-cache/cache/cookbooks/my_cookbook/templates/test-template.txt.erb" and your metadata.rb uses the wrong name attribute, or
You have a typo somewhere in the template name or location while playing with a wrapper cookbook.
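Following that first guess, purely as an illustration: the cache path cookbooks/test/ and the "Recipe: test::test1" line suggest the my_cookbook directory declares name 'test' in its metadata. A quick way to check:

grep "^name" cookbooks/my_cookbook/metadata.rb
# if this prints name 'test', either change it to name 'my_cookbook',
# or keep it and adjust the run list / template location to match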