On an EC2 instance, whenever I execute pm2 I get the message...
Spawning PM2 daemon with pm2_home=<home_dir>/.pm2
This occurs with pm2 info, pm2 list, pm2 -h etc.
A bare pm2 will show help.
I get more of a response when using sudo -i.
It seems something is stopping PM2 from daemonizing when run without sudo.
This solved the problem in my case:
pm2 delete 0
I had been looking at other answers (reinstalling pm2, installing a previous version, permission issues), but none of them worked or applied to my case.
I'm not positive, but I think I had a buggy process that was blocking the spawn. I had used pm2 a couple of days prior, when I first logged into my server, but I had been running an app that kept crashing, and I had tried to listen on port 80 and got permission errors.
Ubuntu 18 server machine, Node 12.14.1, NPM 6.13.4, PM2 4.2.3
This usually means that pm2 is running under a PID that differs from the one in your .pm2/pm2.pid.
To get out of this situation, try one of these:
pm2 kill
or
ps aux | grep pm2, then kill -9 the PID shown for PM2 vX.X.X: God Daemon
If none of the above help:
pkill node && \
pm2 delete all && \
pm2 flush && \
kill -9 $(head -n 1 /home/$USER/.pm2/pm2.pid) && \
rm -rf /home/$USER/.pm2
After that, run pm2 ls or whatever pm2 command you want. That should daemonize pm2 again with the correct PID in .pm2/pm2.pid.
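Before resorting to the commands above, you can check whether the PID on file actually matches the running daemon; a minimal sketch, assuming the default pm2 home and that pgrep is available:
cat ~/.pm2/pm2.pid
pgrep -f "PM2 v"
If the two numbers differ, you are in the mismatched-PID situation described here.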
EDIT
Another possible reason could be an error during pm2 init, so if the above doesn't work for you, check .pm2/pm2.log for errors and fix them.
I fixed this by using an older version of pm2:
npm uninstall -g pm2
npm install -g pm2@3.2.2
I just set
PM2_HOME=C:\Users\<Admin/Your Account>\.pm2
in the environment variables, and then restarted the PC / server.
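If you prefer the command line over the environment-variables dialog, something like this should work from cmd (the account name is a placeholder, and setx only affects newly opened terminals, so start a fresh one afterwards):
setx PM2_HOME "C:\Users\<YourAccount>\.pm2"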
Currently, to start my app in production (in an Ubuntu screen session), I use the command:
nohup ts-node-esm --transpile-only index.ts >/dev/null 2>&1 &
I use this command because I have a lot of configuration in tsconfig.json and package.json (TypeScript and other modules), and I haven't been able to build the project and make everything work, so I run it this way.
I would like the app to restart automatically after a crash, and ideally also write the error to a log file.
To automatically restart the app I tried:
nodemon
npx nodemon
ts-node-dev
forever
and others, but nothing seems to work. The --transpile-only flag is a must; I can't do without it.
Once I understand how to automatically restart the app, I would also like to log errors to a file.
EDIT
I managed to get this command working:
npx nodemon --ext "ts,json" --exec "node --experimental-specifier-resolution=node --experimental-modules --no-warnings --loader ts-node/esm index.ts"
Unfortunately the --transpile-only flag doesn't work with it:
node: bad option: --transpile-only
EDIT 2
The app crashes after some time (I don't understand why) and doesn't restart automatically.
I found this command, again without the --transpile-only flag. Hopefully the log files will be created and it will restart after a crash:
forever start -c "node --experimental-specifier-resolution=node --experimental-modules --no-warnings --loader ts-node/esm index.ts" ./ -o out.log -e err.log -l forever.log
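One hedged suggestion: --transpile-only is a flag of ts-node's own binary, which is why node rejects it. ts-node also reads its flags from environment variables, so setting TS_NODE_TRANSPILE_ONLY (which the node child process spawned by forever inherits) may give the same effect with the loader; a sketch, untested against this exact setup:
TS_NODE_TRANSPILE_ONLY=true forever start -c "node --experimental-specifier-resolution=node --experimental-modules --no-warnings --loader ts-node/esm index.ts" ./ -o out.log -e err.log -l forever.log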
This question is not a duplicate of mariadb server: I can't stop the server with `mysql.server stop`.
I don’t want to run MariaDB at boot so brew services isn’t an option.
MariaDB version is 10.4.11-MariaDB.
Think I found the culprit.
Having a look at the source code of mysql.server (cat /usr/local/bin/mysql.server), I discovered that running mysql.server start runs mysqld_safe as me (whoami), which is what I expected.
Now, I also discovered that running mysql.server stop calls a su_kill function that runs su as mysql, which fails because the mysql user doesn't exist on macOS.
user='mysql'

su_kill() {
  if test "$USER" = "$user"; then
    kill $* >/dev/null 2>&1
  else
    su - $user -s /bin/sh -c "kill $*" >/dev/null 2>&1
  fi
}
Not sure if I am doing something wrong, but according to the documentation, running mysql.server start is the right way of starting MariaDB on brew installs.
Anyhow, to patch mysql.server stop, run:
cp /usr/local/bin/mysql.server /usr/local/bin/mysql.server.backup
sed -i "" "s/user='mysql'/user=\`whoami\`/g" /usr/local/bin/mysql.server
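To confirm the substitution took effect, a quick check (same install path assumed):
grep -n "user=" /usr/local/bin/mysql.server
You should now see user=`whoami` where user='mysql' used to be.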
Originally, whenever I tried mysql.server stop I would get the error:
ERROR! MySQL server PID file could not be found!
At some point, mysql.server stop would just hang.
Exploring @sunknudsen's answer, I cd'ed to the directory:
$ cd /usr/local/bin/
then opened the file:
mysql.server
The code user='mysql' only appears on one line, so I just commented out that line and replaced it with:
#user='mysql'
user=`whoami`
Now, this is what happens:
~$ mysql.server start
Starting MariaDB
.200804 15:43:28 mysqld_safe Logging to '/usr/local/var/mysql/My-MacBook-Pro-2.local.err'.
200804 15:43:29 mysqld_safe Starting mysqld daemon with databases from /usr/local/var/mysql
SUCCESS!
~$ mysql.server stop
Shutting down MariaDB
. SUCCESS!
~$ mysql.server stop
ERROR! MariaDB server PID file could not be found!
The correct start/stop status is also indicated in System Preferences/MySQL.
I'm running Docker Toolbox on VirtualBox on Windows 10.
I'm having an annoying issue: if I docker exec -it mycontainer sh into a container to inspect things, the shell will abruptly exit back to the host shell at random while I'm typing commands. Some experimenting reveals that pressing two letters at the same time (as is common when touch typing) is what causes the exit.
The container will still be running.
Any ideas what this is?
More details
Here's the minimal Docker image I'm running inside. Essentially, I'm trying to deploy Kubernetes clusters to AWS via kops, but because I'm on Windows, I have to use a container to run the kops commands.
FROM alpine:3.5

# install aws-cli
RUN apk add --no-cache \
    bind-tools \
    python \
    python-dev \
    py-pip \
    curl
RUN pip install awscli

# install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl

# install kops
RUN curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
RUN chmod +x kops-linux-amd64
RUN mv kops-linux-amd64 /usr/local/bin/kops
I build this image:
docker build -t mykube .
I run this in the working directory of the project I'm trying to deploy:
docker run -dit -v "${PWD}":/app mykube
I exec into the shell:
docker exec -it $containerid sh
Inside the shell, I start running AWS commands, as described here.
Here's some example output:
##output of previous dig command
;; Query time: 343 msec
;; SERVER: 10.0.2.3#53(10.0.2.3)
;; WHEN: Wed Feb 14 21:32:16 UTC 2018
;; MSG SIZE rcvd: 188
##me entering a command
/ # aws s3 mb s3://clus
##shell exits abruptly to host shell while I'm writing
DavidJ#DavidJ-PC001 MINGW64 ~/git-workspace/webpack-react-express (master)
##container is still running
$ docker ps --all
CONTAINER ID        IMAGE      COMMAND      CREATED         STATUS         PORTS    NAMES
37a341cfde83        mykube     "/bin/sh"    5 minutes ago   Up 3 minutes            gifted_bhaskara
##nothing in docker logs
$ docker logs --details 37a341cfde83
A more useful update
Adding the -D flag gives an important clue:
$ docker -D exec -it 04eef8107e91 sh -x
DEBU[0000] Error resize: Error response from daemon: no such exec
/ #
/ #
/ #
/ #
/ # sdfsdfjskfdDEBU[0006] [hijack] End of stdin
DEBU[0006] [hijack] End of stdout
Also, I've ascertained that the specific trigger is pressing two letters at the same time (which is quite common when I'm touch typing).
There appears to be a GitHub issue for this here, though that one is for Docker for Windows, not Docker Toolbox.
This issue appears to be a bug with Docker and Windows. See the GitHub issue here.
As a workaround, prefix your docker exec command with winpty, which comes with Git Bash.
eg.
winpty docker exec -it mycontainer sh
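If you exec into containers a lot, an alias saves some typing; this is a suggestion of mine rather than something from the linked issue (note that winpty forces a TTY wrapper, so drop the alias for scripted or piped docker calls):
alias docker="winpty docker"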
Check the USER, i.e. the account you are logged in with when doing a docker exec -it yourContainer sh.
Its .bashrc, .bash_profile, or .profile might include a command which would explain why the session abruptly quits.
Check also the logs associated to that container (docker logs --details yourContainer) in order to see if that closed session generated anything in stderr.
Reasons I can think of for a process to be killed in your container include:
PID 1 exiting in the container. This would cause the container to go into a stopped state, but a restart policy could have restarted it. See your docker container inspect output to check whether this is happening (see the inspect sketch after this list). This is the most common cause I've seen.
Out of memory on the OS, where the kernel would then kill processes. View your system logs and dmesg to see if this is happening.
Exceeding the container memory limit, where docker would kill the container, possibly restarting it depending on your policy. You would again view docker container inspect but the status will have different details.
Process being killed on the host, potentially by a security tool.
Perhaps a selinux or apparmor policy being violated.
Networking issues. Never encountered it myself, but since docker is a client / server design, there's a potential for a network disconnect to drop the exec session.
The server itself is failing, and you'd see various logs in syslog / dmesg indicating problems it can't recover from.
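For the PID 1 and memory-limit checks above, a sketch that pulls the relevant fields in one go (the container name is a placeholder):
docker container inspect mycontainer --format 'status={{.State.Status}} exit={{.State.ExitCode}} oom={{.State.OOMKilled}} restarts={{.RestartCount}}'
A non-zero exit code points to the first cause, and oom=true to the third.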
Running PostgreSQL 9.4.5_2 currently.
I have tried
pg_ctl stop -W -t 1 -s -D /usr/local/var/postgres -m f
Normally no news is good news, but afterwards I run
pg_ctl status -D /usr/local/var/postgres
and get pg_ctl: server is running (PID: 536)
I have also tried
pg_ctl restart -w -D /usr/local/var/postgres -c -m i
Response message is:
waiting for server to shut down.......................... failed
pg_ctl: server does not shut down
I've also checked my /Library/LaunchDaemons/ to see why the service is starting at login but no luck so far. Anyone have any ideas on where I should check next? Force quit in the activity monitor also isn't helping me any.
Sadly none of the previous answers helped me; what worked was:
brew services stop postgresql
I tried various options; finally, the command below worked.
sudo -u postgres ./pg_ctl -D /your/data/directory/path stop
example
sudo -u postgres ./pg_ctl -D /Library/PostgreSQL/11/data stop
As per the comments, the recommended command is without the ./ when calling pg_ctl:
sudo -u postgres pg_ctl -D /Library/PostgreSQL/11/data stop
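Either way, you can confirm the server actually stopped the same way the question checked it (same data directory assumed):
sudo -u postgres pg_ctl -D /Library/PostgreSQL/11/data status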
I tried sudo and su, but no such luck.
I just found this GUI:
https://github.com/MaccaTech/postgresql-mac-preferences
If anyone can help with the terminal commands that would be very much appreciated, but until then the GUI will get the job done.
Had the same issue; I had installed Postgres locally and wanted to wrap it in a Docker container instead.
I solved it pretty radically by 1) uninstalling Postgres and 2) killing the leftover process on the Postgres port. If you don't uninstall, the process restarts and grabs the port again; look at the Brewfile from brew bundle dump to check for a restart_service: true flag.
I reasoned that, as I am using containers, I should not need the local one anyway. But be warned: this will remove Postgres from your system.
brew uninstall postgres
...
lsof -i :5432 # this to find the PID for the process
kill -9 <the PID you found with the previous command>
Note: if you still want to use psql, you can brew install libpq and add psql to your PATH (the command output shows you what to add to your .zshrc or similar).
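For reference, that looks roughly like this; the PATH line below is what brew prints on Apple-silicon machines, while Intel Macs use /usr/local/opt/libpq/bin, so treat the exact prefix as an assumption:
brew install libpq
echo 'export PATH="/opt/homebrew/opt/libpq/bin:$PATH"' >> ~/.zshrc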
You can stop the server using this command:
pg_ctl -D /usr/local/var/postgres stop -s -m fast
Adding onto the solutions already stated:
if you decide to use the pg_ctl command, ensure that you are executing it as a user with the permissions to access the databases/database server.
This means:
the current logged-in user on your terminal should have those permissions,
or
first run:
$ sudo su <name_of_database_user>
pg_ctl -D /Library/PostgreSQL/<version_here>/data/ stop
the same goes for the start command.
credit : https://gist.github.com/kingbin/9435292
(essentially hosted a file with the commands on github, saved me some time :^) )
I had a stray docker container running Postgres that I had forgotten about.
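A quick way to check for that situation (assuming the docker CLI is on your PATH):
docker ps | grep -i postgres
Combined with lsof -i :5432 from the answer above, this tells you whether the process holding the port is a container rather than a local install.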
I'm using nginx on OS X 10.8. I freshly installed nginx but can't find a way to restart it other than killing its PID, e.g. kill 64116. I'm wondering if there are better ways to restart nginx.
I found some methods on Google and SO, but they didn't work:
nginx -s restart
sudo fuser -k 80/tcp ; sudo /etc/init.d/nginx restart
The error message for nginx -s restart is:
nginx: [error] open() "/usr/local/var/run/nginx.pid" failed (2: No such file or directory)
Sometimes I also get this error message:
nginx: invalid option: "-s restart"
Try running sudo nginx before starting nginx.
To reload config files:
sudo nginx -s reload
To fully restart nginx:
sudo nginx -s quit
sudo nginx
Details
There is no restart signal for nginx. From the docs, here are the signals that the master process accepts:
SIGINT, SIGTERM Shut down quickly.
SIGHUP Reload configuration, start the new worker process with a new configuration, and gracefully shut down old worker processes.
SIGQUIT Shut down gracefully.
SIGUSR1 Reopen log files.
SIGUSR2 Upgrade the nginx executable on the fly.
SIGWINCH Shut down worker processes gracefully.
Presumably you could send these signals to the process id manually, but the nginx command has the flag nginx -s <signal> that sends signals to the master process for you. Your options are:
stop SIGTERM
quit SIGQUIT
reopen SIGUSR1
reload SIGHUP
No need to futz with the pid manually.
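That said, if you ever do want to signal the master process yourself, a sketch using the pid path from the error message earlier in this thread (your compiled-in path may differ):
sudo kill -QUIT "$(cat /usr/local/var/run/nginx.pid)"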
Edit: just realized much of this info was already in comments on the other answers. Leaving this here anyway to summarize the situation.
What is your nginx PID file location? It is specified in the configuration file, with the default path set at compile time by the configure script. You can search for it like this:
find / -name nginx.pid 2>/dev/null (must be issued while nginx is running)
Solution:
sudo mkdir -p /usr/local/var/run/
ln -s /current/path/to/pid/file /usr/local/var/run/nginx.pid
$ sudo nginx -c /usr/local/etc/nginx/nginx.conf
$ sudo nginx -s reload
Source Link: https://blog.csdn.net/github_33644920/article/details/51733436
Try this:
sudo nginx -s stop
followed by a:
sudo nginx
It seems that nginx keeps track of its state, so if you stop it twice, it will complain. But the above worked for me.
I do it like this:
First kill the process:
ps aux | grep nginx
kill -9 {pid}
Then start nginx
nginx
It works!
As a future resource, you can consult http://wiki.nginx.org/CommandLine
Nginx probably runs as root, so you will need to run a variant of the following command to affect it.
sudo nginx -s stop | reload | quit | reopen
There is usually not much reason to restart nginx the way Apache would need. If you have modified a configuration file, you may just want to use the reload option.
Check if this directory exists:
/usr/local/var/run
This error can occur when nginx tries to initialise its PID file in a location that doesn't exist.
There is a bug here. Depending on whether nginx is running while you modify/restart Apache and/or modify nginx configs, it is possible for this file (which is essentially just a process ID pointer) to be destroyed.
When you attempt to send any signal to nginx like
nginx -s quit;
nginx -s stop;
nginx -s reload;
nginx uses this file to reference the ID of the process to which it needs to send the signal. If the file isn't there, the link between the active running nginx process and the CLI app is effectively broken.
I actually ended up in a state where two nginx processes were running simultaneously, so I killed both.
To work around this, you can either force the termination of existing nginx processes via Activity Monitor (then run nginx and have the CLI app create a new nginx.pid file), or, if you REALLY need to keep nginx running but want to run nginx -s reload, manually create a file in the /run path called nginx.pid containing the PID of the currently running nginx process (obtained via Activity Monitor).
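A sketch of that second workaround from the shell instead of Activity Monitor (the PID value is whatever you find for the running master process):
ps aux | grep '[n]ginx: master'
echo 12345 | sudo tee /usr/local/var/run/nginx.pid
Here 12345 is a placeholder for the master PID printed by the ps command.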
To reload the custom config file use
nginx -s reload -c /etc/nginx/conf.d/<config file>.conf
This could simply mean that nginx is already stopped, i.e. not running at the moment.
First, confirm whether nginx is running, execute:
$ ps aux | grep nginx
I got the same error as you and tried many ways to fix it, but nothing worked.
After that I ran this command and it worked well:
nginx -c /usr/local/etc/nginx/nginx.conf
I got this information from here:
https://blog.csdn.net/wn1245343496/article/details/77974756
One way to stop or reload is through the commands below.
For stop:
sudo /usr/local/nginx/sbin/nginx -s stop
Run reload only if nginx is running:
sudo /usr/local/nginx/sbin/nginx -s reload
By doing it like the above, you won't run into the nginx: [error] open() "/usr/local/var/run/nginx.pid" issue.