Currently, to start my app in production inside an Ubuntu screen session, I use the command:
nohup ts-node-esm --transpile-only index.ts >/dev/null 2>&1 &
I use this command because my tsconfig.json and package.json contain a lot of configuration for TypeScript and other modules, and I have not been able to get a proper build working, so this is what I run instead.
I would like to have the app restart automatically after a crash, and maybe even write the error to a log file.
To restart the app automatically I tried:
nodemon
npx nodemon
ts-node-dev
forever
and others, but nothing seems to work. The --transpile-only flag is a must; I can't do without it.
Once I understand how to restart the app automatically, I would also like to log errors to a file.
EDIT
I managed to get this command working:
npx nodemon --ext "ts,json" --exec "node --experimental-specifier-resolution=node --experimental-modules --no-warnings --loader ts-node/esm index.ts"
Unfortunately the --transpile-only flag doesn't work:
node: bad option: --transpile-only
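The bad option error happens because --transpile-only is a flag of the ts-node CLI, not of node itself, so it cannot be passed to node --loader ts-node/esm. A possible workaround (based on ts-node's documented TS_NODE_* environment variables; a sketch, not verified against this exact setup) is to enable transpile-only through the environment instead of a flag:

TS_NODE_TRANSPILE_ONLY=true npx nodemon --ext "ts,json" --exec "node --experimental-specifier-resolution=node --experimental-modules --no-warnings --loader ts-node/esm index.ts"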
EDIT 2
The app crashes after some time, I don't understand why, and it doesn't restart automatically.
I found this command, unfortunately still without the --transpile-only flag. I hope the log files will be created and the app will restart after a crash:
forever start -c "node --experimental-specifier-resolution=node --experimental-modules --no-warnings --loader ts-node/esm index.ts" ./ -o out.log -e err.log -l forever.log
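Assuming forever passes the current shell's environment through to the process it spawns, the same environment-variable workaround could be combined with the logging flags above; this is only a sketch, not something verified on this setup:

export TS_NODE_TRANSPILE_ONLY=true
forever start -c "node --experimental-specifier-resolution=node --experimental-modules --no-warnings --loader ts-node/esm index.ts" ./ -o out.log -e err.log -l forever.log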
Related
I have built a Docker cron environment to run cron jobs, based on alseambusher/crontab-ui and alpine:3.15.3, and it works great.
To make it work I have had to install a number of things via the Dockerfile: Python so it could run a Python script, Perl for another service, OpenSSL so I could use a self-signed certificate, and so on.
As it stands the container is a lot bigger, which is fine, but if I share it, others won't necessarily want or need the services I have added and will likely need others that I haven't.
I would like to be able to add a command in the ENV section of a Docker Compose file to add services at startup, without having to do a full build each time. I'm sure it would be simpler to add build: > args: and have it rebuild the container on each startup, but my goal is to add to the image only the services that each user needs and declares in the docker-compose file, with no need to have the build files on the system.
I know this will mean a longer startup depending on the services; I'm okay with that.
I know it's normal to run cron on the host and have it call into containers, but cron on Windows WSL has to be started manually every time WSL starts, is easy to forget about, and can't really be automated aside from on startup, and I'd like to do this entirely inside Docker.
How can I add an ENV variable like SERVICE_INSTALL and have it run in bash (which is already added in the Dockerfile and present at /bin/bash) at container startup?
Ideally I'd like to be able to add multiple SERVICE_INSTALL lines if at all possible.
Example:
SERVICE_INSTALL1='apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python'
SERVICE_INSTALL2='python3 -m ensurepip'
SERVICE_INSTALL3='apk add --no-cache perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs'
Or, if nothing else:
SERVICE_INSTALL=apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python && apk add --no-cache perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs wget curl nodejs npm
but then that leaves the problem of installing things through pip or npm.
I have tried adding a command: to the docker-compose file, but every variation I have tried does not work. I'm also concerned about this method because, from my understanding, a command: replaces the container's startup command rather than adding to it, so that is not ideal. Regardless, it doesn't seem like an install command: is possible anyway.
I have tried (each as a single command:, not all together):
command:
- BASH apk --update add openssl
- /bin/bash apk --update add openssl
- BASH RUN apk --update add openssl
- /bin/bash RUN apk --update add openssl
- sh apk --update add openssl
- /bin/sh apk --update add openssl
- apk --update add openssl
Each ends with a message along the lines of Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bash run apk --update add openssl": stat /bin/bash run apk --update add openssl: no such file or directory: unknown
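For context, each item in a YAML command: list becomes a single argv element, so a line like - /bin/bash apk --update add openssl is looked up as one executable path, which is exactly what the no such file or directory error is saying. A form that should be accepted (a sketch only, assuming the image's default start command is supervisord -c /etc/supervisord.conf, as noted further down) splits the shell and its -c argument into separate list items:

command: ["/bin/bash", "-c", "apk --update add openssl && exec supervisord -c /etc/supervisord.conf"]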
UPDATE: I discovered a few things while trying to get this to work.
For command: to work there must not be a - before it.
Anything, even across multiple lines, is essentially treated as a single command, as though it were all on one line, and the parts have to be separated with &&.
It will repeat the command, or show the error from the command failing, and will not continue to the next one until it completes.
For example, the command mkdir -p /test leaves no logs, but the container never actually starts. While Portainer says it's running, trying to bash into it gives an "is restarting, wait until the container is running" message.
mkdir "-p /test" repeats this message:
mkdir: unrecognized option:
BusyBox v1.34.1 (2022-02-02 18:21:20 UTC) multi-call binary.
Usage: mkdir [-m MODE] [-p] DIRECTORY...
Create DIRECTORY
-m MODE Mode
-p No error if exists; make parent directories as needed
It repeats 3 times, 3-4 seconds apart, then 7 seconds, then 8 seconds, then 15 seconds, 27 seconds, 53 seconds, then hits a minute and continues to grow by a few seconds each try.
It also returns the same "wait for the container to be running" message when trying to bash in.
mkdir -p "/test" seems to be the correct formatting. It appears to work, but it leaves no logs, and when attempting to bash in it connects, shows the terminal, then exits; attempting to reconnect shows the same "container is restarting" message, likely because the container stopped once the command finished and is set to restart: always. With the restart option commented out, the container simply exits.
mkdir -p "/test" followed by a new line with supervisord -c /etc/supervisord.conf (the default start command) has mkdir reporting: mkdir: unrecognized option: c
Adding "supervisord -c /etc/supervisord.conf" leaves no logs and a restarting container.
Reversing the order, with supervisord -c /etc/supervisord.conf first, has supervisord reporting the error: Error: positional arguments are not supported: ['mkdir', '-p', '/test'] For help, use /usr/bin/supervisord -h
bash -c "supervisord -c /etc/supervisord.conf, then a new line with && mkdir -p /test, then another new line with && mkdir -p /test2" runs with a working container, but no directories are created.
Reversing the order seems to work and creates the directories, with a running container:
command:
  bash -c "mkdir -p /test
  && mkdir -p /test2
  && supervisord -c /etc/supervisord.conf"
This indicates that it will run them in order, but only proceeds to the next one after the previous finishes.
A test confirmed that the same can be done with other dependencies, so long as the original startup command comes last. I'd rather have the container start first and then install the dependencies while it is running, since they are not required for the container itself to run; they are only added for use in the cron jobs that will run on a schedule, so if the container starts and the dependencies cannot be used for the first 2, 3, even 5 or 10 minutes, that might only affect their first attempt if it happens to fall in that window.
This is alright, and I now understand better how the command: option works, but it still requires users to know and correctly include the default start command. The command: option is also a lot more particular and easy to get wrong, while ENV variables are something every Docker user knows, has experience with, and can implement more simply.
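One direction that matches the ENV-based goal is a small wrapper entrypoint that runs every SERVICE_INSTALL* variable as a shell command and then hands off to the normal start command. This is only a sketch: the name install-services.sh is hypothetical, the script would have to be baked into the image once (or bind-mounted), and it assumes the default start command is supervisord -c /etc/supervisord.conf.

#!/bin/bash
# install-services.sh (hypothetical name) - wrapper entrypoint, not part of crontab-ui
# Run each SERVICE_INSTALL* environment variable as a shell command in the
# background, then hand off to the image's normal start command.
(
  for var in $(env | grep -o '^SERVICE_INSTALL[0-9]*' | sort); do
    cmd="${!var}"
    if [ -n "$cmd" ]; then
      echo "install-services: running $var"
      bash -c "$cmd" || echo "install-services: $var failed" >&2
    fi
  done
) &
exec supervisord -c /etc/supervisord.conf

In docker-compose the script would then be referenced with entrypoint: /install-services.sh, and each user only declares the SERVICE_INSTALL variables they need; because the installs run in the background, the container starts immediately and the cron jobs simply wait out the first few minutes, as described above.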
I am using a watch command (in a shell script) in my Docker image.
Command:
watch -d -t -g ls -la ${DIR_TO_WATCH} && sleep 5 && ${COMMAND} | tee
This command watches a directory, and if there is any change in the directory structure, we perform certain actions.
I am using this docker image in my helm chart.
Now, when I deploy the chart and check the logs of that pod, my terminal breaks and is not user friendly anymore.
Command:
kubectl logs -f pod-name -n name-space
After this, we need to reset the terminal settings to get the terminal to behave normally.
Is there anything that can be done to prevent this?
Best Regards,
Akshat
Solved this by sending the output of watch to /dev/null.
watch -d -t -g ls -la ${DIR_TO_WATCH} > /dev/null && sleep 5 && ${COMMAND} | tee
The reason behind the broken terminal, according to my understanding, was:
Two different commands' logs (those from watch and from ${COMMAND}) were showing up on the same terminal at the same time, which resulted in creating a new terminal over the default one (I am not sure how), causing the default terminal to break.
While ${COMMAND} logs were crucial for me, I did not need to view or monitor logs from watch. Hence, I sent the log outputs of watch to /dev/null and it solved my problem.
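For completeness, when the terminal has already been corrupted this way it can usually be restored in place with the standard recovery commands (this matches the "reset terminal settings" step mentioned in the question):

stty sane
reset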
Please correct me if my understanding or approach is wrong.
Thank you.
I can ssh onto a machine and run the following script:
echo testing
docker-compose exec -T meteor php artisan down
echo done
which returns
testing
Application is now in maintenance mode.
done
However, if I try to run that command over ssh it exits immediately after the docker-compose call.
ssh me@me.com << EOF
echo testing
docker-compose exec -T meteor php artisan down
echo done
EOF
gives
testing
Application is now in maintenance mode.
i.e. done is missing.
I can get it to continue by adding && after the docker-compose command, but I've got a long script and it makes it ugly and error-prone if I have to explicitly state this.
Any idea why this is happening and what I can change to fix it?
Update
I removed the -T from docker-compose and the script ran to completion; however, it gave the message the input device is not a TTY. It appears it can't allocate the interactive console. After a bit more googling I found that I can call
export COMPOSE_INTERACTIVE_NO_CLI=1
And then it will run to completion without giving error messages.
Thanks all for the help :)
The issue was being caused by the -T flag to docker-compose.
This flag had been added because an error message was printed without it: the input device is not a TTY
I found you could prevent docker-compose from creating an interactive terminal if you use
export COMPOSE_INTERACTIVE_NO_CLI=1
Then the script runs correctly without the -T option.
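A related note (this is my assumption about why the here-document version stopped early, not something stated in the original answer): docker-compose exec keeps stdin attached even with -T, so over ssh it can swallow the rest of the here-document, which would explain why echo done never ran. Redirecting its stdin sidesteps that without dropping -T:

docker-compose exec -T meteor php artisan down < /dev/null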
I'd like to run a node server in the background and start Karma (on Windows 7). Writing a bash script like the following (and running it with Git Bash) appears to work, but it reports to a separate window instead of the WebStorm terminal:
#!/bin/bash
node test/server/index.js &
karma start karma.conf.js
package.json
"scripts": {
"test": "test.sh"
},
If I run it from Git Bash with bash test.sh, then it reports to the same window.
I tried to do something similar in npm, but it cannot run background processes.
"scripts": {
"test": "node test/server/index.js & karma start karma.conf.js"
},
No matter what I try, it runs things only in a single process, so it waits for the node server to exit, and thus the Karma server never starts.
Any idea how to get the bash script to report to the WebStorm terminal, or how to parallelize the npm script?
Update:
I think I have found the reason: https://github.com/npm/npm/issues/8358. This seems to be a Windows-related issue; on Linux it would work properly, so it is not possible to fix the npm script itself. Instead of bash, I think I'll move the Karma server and the node server into a node script and create a child process for the node server to be Windows-compatible. I hope that way the Karma logs will show up in the WebStorm terminal.
Cross-platform shell parallelization solution
I had a little time to research the topic further. There are cross-platform parallelization tools available for npm and shell scripts:
https://github.com/mysticatea/npm-run-all
https://github.com/kimmobrunfeldt/concurrently
https://github.com/royriojas/shell-executor
There was an initiative to merge all of these projects along with others, which was more or less successful: https://github.com/mysticatea/npm-run-all/issues/10. According to one of the contributors, npm-run-all is in good shape now; on the other hand, the npm-run-all repo does not seem very active nowadays, so it is probably better to use concurrently or shell-executor instead.
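As an illustration (a sketch assuming concurrently is installed as a dev dependency; this is not taken from the original scripts), the test script could become:

"scripts": {
"test": "concurrently \"node test/server/index.js\" \"karma start karma.conf.js\""
},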
WebStorm settings / Git bash solution
I set the WebStorm terminal to Git Bash instead of cmd.exe:
File/Settings > Tools/Terminal > Shell path: "C:\Program Files\Git\bin\bash.exe" > Ok
And I changed the npm script to run with bash:
"scripts": {
"test": "bash -c \"node test/server/index.js & karma start karma.conf.js\""
},
Hopefully the bash commands work the same on Linux too; I have to check with Travis, but there is a very good chance they will.
Using the bash command for the sh file works too:
"scripts": {
"test": "bash test.sh"
},
Is npm shell configuration a possible solution?
It is interesting that the solution above did not work without the bash command. Probably npm started the script with cmd.exe, which opened bash.exe in a new window when it checked the header and realized it was a bash script. And yes, I checked, and npm uses cmd.exe by default:
$ npm config ls -l | grep shell
shell = "C:\\Windows\\system32\\cmd.exe"
So another option might be to set the npm shell to Git Bash, so that I don't have to use bash in my scripts:
npm config set shell "C:\Program Files\Git\bin\bash.exe"
Well, I did exactly that, but nothing changed. I still have to use bash in my scripts and the sh file still opens in a new window. It does not make a real difference; we still need the WebStorm settings to run the script with bash, so this is not a solution.
On an EC2 instance, whenever I execute pm2 I get the message...
Spawning PM2 daemon with pm2_home=<home_dir>/.pm2
This occurs with pm2 info, pm2 list, pm2 -h etc.
A bare pm2 will show help.
I get more of a response when running via sudo -i.
It seems something is stopping PM2 from daemonizing when run without sudo.
This solved the problem in my case:
pm2 delete 0
I had been looking at other answers (reinstalling pm2, installing a previous version, fixing permission issues), and none of them worked or applied to my case.
I'm not positive, but I think I had a buggy process that was blocking the spawn. I had used pm2 a couple of days prior when I first logged into my server, but I had been running an app that kept crashing, and when I tried to listen on port 80 I got permission errors.
Ubuntu 18 server machine, Node 12.14.1, NPM 6.13.4, PM2 4.2.3
This usually means that pm2 is running under a PID that differs from the one in your .pm2/pm2.pid.
To get out of this situation, try one of these:
pm2 kill
or
ps aux | grep pm2 and then kill -9 the PID found next to PM2 vX.X.X: God Daemon
If none of the above helps:
pkill node && \
pm2 delete all && \
pm2 flush && \
kill -9 $(head -n 1 /home/$USER/.pm2/pm2.pid) && \
rm -rf /home/$USER/.pm2
After that, run pm2 ls or whatever pm2 command you want. That should daemonize pm2 again with the correct PID in .pm2/pm2.pid.
EDIT
Another possible reason could be an error during pm2 initialization, so if the above doesn't work for you, check .pm2/pm2.log for errors and fix them.
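A quick way to confirm that the daemon and the pid file agree again (assuming the default PM2_HOME of ~/.pm2):

cat ~/.pm2/pm2.pid
ps aux | grep "PM2 v"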
Hi, I fixed this by using an older version of pm2:
npm uninstall -g pm2
npm install -g pm2@3.2.2
I just set
pm2_home=C:\Users\<Admin/Your Account>\.pm2
in the environment variables, and then restarted the PC/server.
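For reference, one way to set that variable persistently from a Windows command prompt is setx, which stores it as a user environment variable (the path placeholder is the same as above):

setx PM2_HOME "C:\Users\<Admin/Your Account>\.pm2"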