stdin is not a tty when trying to run multiple commands using the '&' operator in a shell script - bash

I was working on a microservice project, so I needed to run all the services at once. I set up a bash script, but it throws a "stdin is not a tty" error and only runs the last command:
yarn --cwd /d/offic_work/server/customer/ start:dev &
yarn --cwd /d/offic_work/server/admin start:dev &
yarn --cwd /d/offic_work/server/orders/ start:dev &
yarn --cwd /d/offic_work/server/product start:dev

I'm not particularly familiar with yarn, but "is not a tty" means it is seeking input from the user and can't get any because you ran it in the background. So what you need to do is run it in the foreground, find out what input it is seeking, then figure out what command-line arguments or other configuration will let it run without user intervention. Once you know that, you can alter your script or take appropriate action so that it can run in the background.
In some cases, programs expect the user to confirm some question with "y". That's why the "yes" command exists on Unix: it outputs endless "y"s for exactly this purpose. You could also try piping yes into your command:
yes | yarn --cwd /d/offic_work/server/customer/ start:dev &
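Another rough sketch (untested): detach every background job from the terminal by redirecting its stdin from /dev/null and then wait for all of them, so that none of the services tries to read from a tty:
#!/bin/bash
# start each service with stdin detached from the terminal
yarn --cwd /d/offic_work/server/customer/ start:dev </dev/null &
yarn --cwd /d/offic_work/server/admin start:dev </dev/null &
yarn --cwd /d/offic_work/server/orders/ start:dev </dev/null &
yarn --cwd /d/offic_work/server/product start:dev </dev/null &
wait  # keep the script alive until every service exits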

First, create a list of your services, e.g. list.txt:
customer
admin
orders
product
Second, run them in parallel with xargs -P 0:
# dry-run - test
xargs -I SERVICE -P 0 echo "yarn --cwd /d/offic_work/server/SERVICE start:dev" < list.txt
# run - remove the echo
xargs -I SERVICE -P 0 yarn --cwd /d/offic_work/server/SERVICE start:dev < list.txt
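Here -I SERVICE substitutes each line of list.txt into the command in place of the SERVICE placeholder, and -P 0 tells GNU xargs to run as many of the resulting commands in parallel as possible, so the dry run should print one ready-to-run command per service, e.g. yarn --cwd /d/offic_work/server/customer start:dev.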

Related

Docker: How to ADD a service via ENV variables?

I have built a Docker Cron Environment to run Cronjobs based on alseambusher/crontab-ui using alpine:3.15.3 & it works great.
For it to work I have had to install a number of things via the Dockerfile, editing it & adding python so it could run a python script, perl for another service, openssl so I could use a Self-signed certificate, etc.
As it stands the Container is a lot bigger, which is fine, but if I am to share the container others won't necessarily want or need the services I have added & will likely need others that I haven't.
I would like to be able to add a command in the ENV of a Docker Compose to add services at startup without having to do a full build each time. I'm sure it would be simpler to add build: with args: & have it rebuild the container each startup, but my goal is to have it add to an image only the services that each user needs & declares in the Docker-Compose, with no need to have the files for the build on the system.
I know this will mean a longer startup depending on the services, I'm okay with that.
I know it's normal to run cron on the host & have it call into containers, but cron on Windows WSL has to be manually started every time WSL starts, is easy to forget about, & can't really be automated aside from on startup, & I'd like to do this entirely inside Docker.
How can I add an ENV like SERVICE_INSTALL to have it run in BASH (which is already added in the Dockerfile & present at /bin/bash) at container startup?
Ideally I'd like to be able to add multiple SERVICE_INSTALL lines if at all possible.
Example:
SERVICE_INSTALL1='apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python'
SERVICE_INSTALL2='python3 -m ensurepip'
SERVICE_INSTALL3='apk add --no-cache perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs'
Or, if nothing else:
SERVICE_INSTALL=apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python && perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs && wget && curl && nodejs && npm
but then that leaves the problem of installing things through pip or npm.
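In docker-compose terms these would presumably just be ordinary environment entries, something like (service and image names are illustrative):
services:
  cron:
    image: my-cron-image
    environment:
      SERVICE_INSTALL1: 'apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python'
      SERVICE_INSTALL2: 'python3 -m ensurepip'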
I have tried adding a command: to the Docker-Compose, but every variation I have tried does not work. I'm also concerned with this method, as from my understanding a command: replaces the container's startup command rather than adding to it, so that is not ideal. Regardless, it doesn't seem like an install command: is possible anyway.
I have tried: (Each as a single command: not together)
command:
- BASH apk --update add openssl
- /bin/bash apk --update add openssl
- BASH RUN apk --update add openssl
- /bin/bash RUN apk --update add openssl
- sh apk --update add openssl
- /bin/sh apk --update add openssl
- apk --update add openssl
Each ends with a message along the lines of Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bash run apk --update add openssl": stat /bin/bash run apk --update add openssl: no such file or directory: unknown
UPDATE: I discovered a few things trying to get this to work
for command: to work there must not be any - before it
anything, even across multiple lines, is treated as a single command, essentially as though it were all on the same line, & the parts have to be separated with &&
it will repeat the command, or show the error from failing to execute it, & not continue to the next one until it completes.
for example, the command mkdir -p /test leaves no logs, but the container never actually starts. While Portainer says it's running, trying to bash into it gives an "is restarting, wait until the container is running" message
mkdir "-p /test" repeats this message
mkdir: unrecognized option:
BusyBox v1.34.1 (2022-02-02 18:21:20 UTC) multi-call binary.
Usage: mkdir [-m MODE] [-p] DIRECTORY...
Create DIRECTORY
-m MODE Mode
-p No error if exists; make parent directories as needed
3 times 3-4 seconds apart, then 7 seconds, then 8 seconds, then 15 seconds, 27 seconds, 53 seconds, then hits a minute & continues to grow a few seconds each try.
It also returns the same "wait until the container is running" message when trying to bash in.
mkdir -p "/test" seems to be the correct formatting, it appears to work but leaves no logs & when attempting to bash in it connects, shows the terminal, then exits, attempting to reconnect shows the same container is restarting message, likely because the container stopped once the command was finished & is set to restart: always. commenting out the restart command the container exits.
mkdir -p "/test" followed by a new line with supervisord -c /etc/supervisord.conf (the default start command) has mkdir reporting mkdir: unrecognized option: c
adding "supervisord -c /etc/supervisord.conf" leaves no logs & a restarting container.
reversing the order, with supervisord -c /etc/supervisord.conf 1st has supervisord reporting the error Error: positional arguments are not supported: ['mkdir', '-p', '/test'] For help, use /usr/bin/supervisord -h
bash -c "supervisord -c /etc/supervisord.conf with a new line & && mkdir -p /test with a new line & && mkdir -p /test2" runs with a working container, but no directories created
reversing the order seems to work & creates the directories, with a running container
command:
  bash -c "mkdir -p /test
  && mkdir -p /test2
  && supervisord -c /etc/supervisord.conf"
Which indicates that it will run them in order, but only proceeds to the next one after the previous finishes.
a test confirmed that the same can be done with other dependencies, so long as the initial startup command is last. I'd rather have the container start 1st & then install the dependencies while it is running: they are not required for the container itself to run, but are added for use in the cronjobs that will run on a schedule, so if the container starts & the dependencies cannot be used for the 1st 2, 3, even 5 or 10 minutes, that might only affect a job's 1st attempt if it happens to fall in that window.
This is alright, I now understand better how the command: option works, but it still requires users to know & properly include the default start command. The command: option is also a lot more particular & easy to get wrong, while ENV variables are something every Docker user knows, has experience with, & they are simpler to implement.
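What I'm imagining is roughly an entrypoint wrapper along these lines (just a sketch; the wrapper & the SERVICE_INSTALL prefix are placeholders, & supervisord -c /etc/supervisord.conf is the image's default start command mentioned above):
#!/bin/bash
# entrypoint wrapper (sketch): run every SERVICE_INSTALL* variable as a shell
# snippet, then hand off to the image's normal start command
set -e
for var in $(compgen -v | grep '^SERVICE_INSTALL'); do
    echo "Running install step from $var"
    bash -c "${!var}"
done
exec supervisord -c /etc/supervisord.conf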

Why nohup on execute resource doesn't work - Chef recipe

I'm trying to deploy a Django app (dev mode) using Chef. The problem is that when the recipe executes, the server isn't kept alive. The command works when I log in and run it manually, but not from the recipe, presumably because the session is different. Any suggestions are helpful.
execute 'django_run' do
user 'root'
cwd '/var/www/my-app/'
command 'source ./.venv/bin/activate && sudo -E nohup python2 ./manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1 &'
end
I suspect some weirdness with sudo and & is at play here. Try using sudo -b instead of the trailing ampersand. Also, a better way to do this may be to use the service Chef resource instead of execute:
https://docs.chef.io/resources/service/
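A minimal sketch of the first suggestion, applied to the resource from the question (untested; only the command changes, letting sudo -b put the process in the background instead of the trailing &):
execute 'django_run' do
  user 'root'
  cwd '/var/www/my-app/'
  # sudo -b backgrounds the command itself, so no trailing & is needed
  command 'source ./.venv/bin/activate && sudo -E -b nohup python2 ./manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1'
end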

Send commands directly in running process and indirectly (e. g. with tail)

I am currently building a docker project for running a Minecraft Spigot server.
To achieve this I need to be able to run commands in the running shell (when using docker run -it d3strukt0r/spigot) and indirectly with docker exec <name> console <command>. Unfortunately, I'm not too fond of the bash language.
Currently, I am able to send commands indirectly, which is great when being detached. I got this with:
_console_input="/app/input.buffer"
# Clear console buffers
true >$_console_input
# Start the main application
echo "[....] Starting Minecraft server..."
tail -f $_console_input | tee /dev/console | $(command -v java) $JAVA_OPTIONS -jar /app/spigot.jar --nogui "$@"
And when running the console command, all it does is the following:
echo "$#" >>/app/input.buffer
The code can be found here
Does someone know a way of how to be able to now add the functionality to directly enter commands?
USE CASE ONE: A user may run attached using docker run
docker run -it --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user should definitely be able to use the console as he is used to (when running java -jar spigot.jar).
If he has a second console open he can also send a command with:
docker exec spigot console "time set day"
USE CASE TWO: A user may run detached using docker run -d
docker run -d --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user is only able to send commands indirectly.
docker exec spigot console "time set day"
USE CASE THREE AND FOUR: Use docker-compose (look at the use case "two", it's basically the same)
You could make a script that acts like a mini-shell, reading from stdin and writing to /app/input.buffer. Set it as the container's CMD so it runs by default. Put it in the same directory as your Dockerfile and make sure it's executable.
interactive_console
#!/bin/sh
while IFS= read -rp '$ ' command; do
printf '%s\n' "$command"
done >> /app/input.buffer
Dockerfile
COPY interactive_console /usr/bin
CMD interactive_console
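Assuming the existing start script (the tail -f pipeline shown above) is still what launches the server, both the attached and the detached paths end up appending lines to the same /app/input.buffer: typing into the mini-shell after docker run -it, or docker exec spigot console "time set day" from a second terminal.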

changing user in upstart script

I have an upstart script that does some logging tasks. The script testjob.conf looks like below:
description "Start Logging"
start on runlevel [2345]
script
sudo -u user_name echo Test Job ran at `date` >> /home/user_name/Desktop/jobs.log
end script
Then I run the script with sudo service testjob start and I get testjob stop/waiting as the result. The file jobs.log is created and the logging is done. However, the file is owned by root. I wanted to change this and hence added the sudo -u user_name part in front of the command, as mentioned in this similar post.
However this does not seem to do the trick. Is there another way to do this?
The log file is created by the >> redirection, which runs in the context of the root shell that also starts sudo.
Try making the process that sudo starts create the file, for instance with:
sudo -u user_name sh -c 'echo Test Job ran at `date` >> /home/user_name/Desktop/jobs.log'
In this case the sh running as user_name will "execute" the >> redirection.
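Applied to testjob.conf from the question, the stanza would then look roughly like this:
description "Start Logging"
start on runlevel [2345]

script
    sudo -u user_name sh -c 'echo Test Job ran at `date` >> /home/user_name/Desktop/jobs.log'
end script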

Run multiple commands in linux without logging as root user?

I want to write a script that I can run as a non-root user; the script contains multiple commands.
For example:
sudo -i
hostname
df -h
I tried the same 3 commands in a script, but it just logs in to the root user and does not execute the hostname and df -h commands.
If you want to run a command with root privileges, use
sudo command
The command
sudo -i
will log you in to root's shell.
If you want to run multiple commands you should use
command1 && command2
It runs command2 after command1 has successfully finished.
If you need root privileges in your script, maybe you should use sudo when executing the script, and inside the script check whether the user has the necessary privileges (http://www.cyberciti.biz/tips/shell-root-user-check-script.html).
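A minimal sketch of that last suggestion, using the commands from the question (start it as sudo ./myscript.sh; the check only matters if some step really needs root):
#!/bin/bash
# verify that we actually have root privileges before doing anything
if [ "$(id -u)" -ne 0 ]; then
    echo "Please run this script with sudo" >&2
    exit 1
fi
hostname
df -h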
