CentOS 7.
The problem is this: if I run in the console:
/opt/dklab_realplexor/dklab_realplexor.int start
everything is OK and the service starts. But if I create a service file and run:
systemctl start realplexor
the service fails with an error (which, for some reason, is not logged anywhere) and no PID file is created. What could be causing this? Inside dklab_realplexor.int, a Perl script is run with parameters:
cd $CWD && $BIN $CONF -p $PIDFILE 2>&1 | logger -p `eval "echo $LOGPRI"` -t `eval "echo $LOGTAG"` &
Full service file:
[Unit]
Description=realplexor
[Service]
Type=forking
PIDFile=/var/run/dklab_realplexor_dklab_realplexor.conf.pid
WorkingDirectory=/opt/dklab_realplexor
User=root
Group=root
Environment=RACK_ENV=production
OOMScoreAdjust=-1000
ExecStart=/opt/dklab_realplexor/dklab_realplexor.int start
ExecStop=/opt/dklab_realplexor/dklab_realplexor.int stop
ExecReload=/opt/dklab_realplexor/dklab_realplexor.int reload
TimeoutSec=5000
[Install]
WantedBy=multi-user.target
Can you tell me, please, where I should dig?
It looks like your script uses environment variables which are not defined in your service file.
To see the list of available environment variables, add this to your script (dklab_realplexor.int):
env | sort
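If variables really are missing, they can be supplied in the unit itself. A minimal sketch (the names LOGPRI and LOGTAG come from the script above; the values and the env-file path are illustrative assumptions, not from the question):

```ini
[Service]
# Example values only; use whatever the script actually expects.
Environment=LOGPRI=local0.info
Environment=LOGTAG=realplexor
# Alternatively, keep them in a separate file; the leading "-" makes
# the file optional, and the path here is an assumption:
EnvironmentFile=-/etc/sysconfig/realplexor
```

Run `systemctl daemon-reload` after editing the unit so the change takes effect.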
I am trying to reference a user directory at boot on my Raspberry Pi 4 (32-bit Bullseye desktop) with a service set up via systemctl. This service writes my IP address to an .env file. The process is as follows:
setup.service attempts to create a .env file using setup.sh with my current ip address
setup.timer fires off the service 1 minute after boot
This works flawlessly when my setup.sh looks like this:
...
destdir=/home/me/my_directory/.env
echo "REACT_APP_MACHINE_HOST_IP=$REACT_APP_MACHINE_HOST_IP" > "$destdir"
But when I try to replace the value of "me" with the machine's user in order to make this more transferable, I get this error on boot:
...
destdir=/home/"$USER"/my_directory/.env
echo "REACT_APP_MACHINE_HOST_IP=$REACT_APP_MACHINE_HOST_IP" > "$destdir"
/home//my_directory/.env does not exist...
Not sure where I'm off because I made a quick test.sh script and the following echoed the directory correctly:
#!/bin/bash
mydir=/home/"$USER"/my_directory/
echo $mydir
It seems like no user is recognized. I am assuming this has something to do with systemd running as root? Or is my syntax off?
EDIT:
My setup.service below:
[Unit]
Description=setup mcw
After=multi-user.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/setup.sh
[Install]
WantedBy=multi-user.target
And the script I run to automate this, automate.sh:
# copy ip file to task and authorize
sudo cp /home/${USER}/my_directory/setup.sh /usr/local/bin/setup.sh
sudo chmod 744 /usr/local/bin/setup.sh
# create setup files
sudo cp /home/${USER}/my_directory/setup.service /etc/systemd/system/setup.service
sudo cp /home/${USER}/my_directory/setup.timer /etc/systemd/system/setup.timer
# add permissions for the service
sudo chmod 644 /etc/systemd/system/setup.service
# setup and reload systemctl
sudo systemctl daemon-reload
sudo systemctl enable setup.timer
sudo systemctl start setup.timer
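A likely cause: a boot-time service runs as root with a nearly empty environment, so $USER is not defined inside setup.sh, and the path collapses to /home//my_directory/.env. One hedged fix (here "me" stands in for the real account name): setting User= in the unit makes systemd define $USER, $HOME and $LOGNAME for the script:

```ini
[Service]
Type=oneshot
# "me" is a placeholder for the actual account; with User= set,
# systemd exports $USER, $HOME and $LOGNAME to setup.sh.
User=me
ExecStart=/usr/local/bin/setup.sh
```

If the service must stay root, another option is to bake the account name into the installed script in automate.sh (where $USER is defined), rather than resolving it at boot.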
I'm trying to run meilisearch on an ec2 instance, but having a lot of trouble getting it to start automatically and was wondering if somebody might be able to assist.
https://docs.meilisearch.com/guides/advanced_guides/installation.html#usage
My current user data is the following, but the env variable isn't being set, and the process then starts up listening on a different port. Is there another way to set env variables in an EC2 start-up script? Or is there something else I'm doing wrong?
#!/bin/bash
export MEILI_HTTP_ADDR="0.0.0.0:80"
curl -L https://install.meilisearch.com | sh
# Write systemd unit file
cat << EOF > /etc/systemd/system/meilisearch@ecs-agent.service
[Unit]
Description=Meilisearch Service %I
[Service]
Restart=always
ExecStart=/meilisearch
[Install]
WantedBy=default.target
EOF
systemctl enable meilisearch@ecs-agent.service
systemctl start meilisearch@ecs-agent.service
I think that the env variable that you export on line 2 of your script is not being used by Systemd.
Instead, you should provide the env variable in the service file like this:
[Service]
Restart=always
ExecStart=/meilisearch
Environment=MEILI_HTTP_ADDR=0.0.0.0:80
Please let me know if that solves your problem :)
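The distinction is easy to demonstrate: an export in the user-data shell only reaches children of that shell, while a systemd-started service is a child of PID 1 and only sees what the unit file (or systemd's defaults) provide. A small illustration:

```shell
# A variable exported in this shell is visible to its children...
export MEILI_HTTP_ADDR="0.0.0.0:80"
sh -c 'echo "child sees: $MEILI_HTTP_ADDR"'
# ...but a service started by systemd is not a child of this shell,
# so it never inherits the export; hence Environment= in the unit.
```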
I am currently building a docker project for running a Minecraft Spigot server.
To achieve this I need to be able to run commands in the running shell (when using docker run -it d3strukt0r/spigot) and indirectly with docker exec <name> console <command>. Unfortunately, I'm not too fond of the bash language.
Currently, I am able to send commands indirectly, which is great when being detached. I got this with:
_console_input="/app/input.buffer"
# Clear console buffers
true >$_console_input
# Start the main application
echo "[....] Starting Minecraft server..."
tail -f $_console_input | tee /dev/console | $(command -v java) $JAVA_OPTIONS -jar /app/spigot.jar --nogui "$@"
And when running the console command, all it does is the following:
echo "$@" >> /app/input.buffer
The code can be found here
Does someone know a way of how to be able to now add the functionality to directly enter commands?
USE CASE ONE: A user may run attached using docker run
docker run -it --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user should definitely be able to use the console as he is used to (when running java -jar spigot.jar).
If he has a second console open he can also send a command with:
docker exec spigot console "time set day"
USE CASE TWO: A user may run detached using docker run -d
docker run -d --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user is only able to send commands indirectly.
docker exec spigot console "time set day"
USE CASE THREE AND FOUR: Use docker-compose (look at the use case "two", it's basically the same)
You could make a script that acts like a mini-shell, reading from stdin and writing to /app/input.buffer. Set it as the container's CMD so it runs by default. Put it in the same directory as your Dockerfile and make sure it's executable.
interactive_console
#!/bin/bash
# Prompt, read one line at a time, and append each line to the
# shared buffer the server is tailing. (read -p is a bashism.)
while IFS= read -rp '$ ' command; do
    printf '%s\n' "$command"
done >> /app/input.buffer
Dockerfile
COPY interactive_console /usr/bin/
CMD ["interactive_console"]
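The read loop can be exercised outside Docker; piping commands into a loop that appends to a stand-in buffer file shows the mechanism (a mktemp file replaces /app/input.buffer here, and the prompt is dropped since stdin is not a terminal):

```shell
# Feed two commands on stdin and check they land in the buffer file.
buffer=$(mktemp)
printf 'time set day\nsay hello\n' | while IFS= read -r command; do
    printf '%s\n' "$command"
done >> "$buffer"
cat "$buffer"
```

In the container, an attached user runs the mini-shell by default, and a detached user can still open one later with `docker exec -it spigot interactive_console`.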
I have docker container starting with command:
CMD ["/bin/bash", "/usr/bin/gen_new_key.sh"]
script looks like:
#!/bin/bash
/usr/bin/generate_signing_key -k xxxx -r eu-west-2 > /usr/local/nginx/s3_signature_key.txt
{ read -r val1
read -r val2
sed -i "s!AWS_SIGNING_KEY!'$val1'!;
s!AWS_KEY_SCOPE!'$val2'!;
" /etc/nginx/nginx.conf
} < /usr/local/nginx/s3_signature_key.txt
if [ -z "$(pgrep nginx)" ]
then
nginx -c /etc/nginx/nginx.conf
else
nginx -s reload
fi
The script itself works, as I can see all the data in the docker layer under /var/lib/docker.
It is intended to be run by cron every 5 days, as the AWS signing key generated in the first line is valid for only 7 days. How can I prevent Docker from quitting after the script finishes and keep it running?
You want a container that is always on with nginx running, and to run that script every 5 days.
First you can just run nginx using:
CMD ["nginx", "-g", "daemon off;"]
This way, the container is always ON with nginx running.
Then just run your script as a regular script via cron:
chmod +x script.sh
0 0 */5 * * script.sh
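Note that inside a container no cron daemon runs by default; it has to be installed and started alongside nginx. A rough sketch of wiring this into the image (script name from the question; the Debian-style cron package, paths, and combined CMD are assumptions):

```dockerfile
# Sketch only: assumes a Debian-based image with the cron package installed.
COPY gen_new_key.sh /usr/bin/gen_new_key.sh
RUN chmod +x /usr/bin/gen_new_key.sh \
 && echo '0 0 */5 * * /usr/bin/gen_new_key.sh' | crontab -
# Start cron in the background, then keep nginx in the foreground
# so the container stays up.
CMD cron && nginx -g 'daemon off;'
```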
EDIT: since the script must also run the first time the container starts:
1) One solution (the clean one) is to load a valid AWS signing key manually the first time. After that, the script will update the AWS signing key automatically (using the solution presented above).
2) The other solution is to run a docker entrypoint file (which is your script):
# Your script
COPY docker-entrypoint.sh /usr/local/bin/
RUN ["chmod", "+x", "/usr/local/bin/docker-entrypoint.sh"]
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
# Define default command.
CMD ["/bin/bash"]
In your script:
service nginx start
echo "Nginx is running."
#This line will prevent the container from turning off
exec "$@"
Plus more info about the reason for, and use of, the exec line.
I have created a docker container to stand up Elasticsearch. Elasticsearch is being started and managed by supervisor which is also installed on my docker container. I have created an entrypoint.sh script and added the following to the end of my Dockerfile
ENTRYPOINT ["/usr/local/startup/entrypoint.sh"]
My entrypoint.sh script looks as follows:
#!/bin/bash -x
# Start Supervisor if not already running
if ! ps aux | grep -q "[s]upervisor"; then
    echo "Starting supervisor service"
    exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
else
    echo "Supervisor is currently running"
fi
echo "creating /.es_created"
touch /.es_created
exec "$@"
When I start my docker container, supervisor starts and in turn successfully starts elasticsearch. The problem is that the script never executes its last bit, creating the .es_created file. It seems like once the
exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
command is executed, it just stops there. I added -x to the #!/bin/bash so I could call docker logs on the container and it confirms that it never calls the last echo and touch commands. I feel like I may be missing something about entrypoint scripts which is why this is happening, but ultimately I want to be able to execute some commands after elasticsearch has started so I can configure a proper index and insert some data.
Your guess
It seems like once the
exec /usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
command is executed, it just stops there.
is correct: the exec builtin of bash has exactly these semantics: the specified program is executed and replaces the parent shell process (it is an exec system call).
So your question is actually not a Docker issue; it is rather related to Bash. For more details on the exec shell builtin, you could for example take a look at this askubuntu question, or read the corresponding doc in the bash reference manual.
To sum up, you should try to just write
/usr/bin/supervisord -nc /etc/supervisor/supervisord.conf
If that command indeed runs in the background, it should be OK. Otherwise, you could of course append a &:
/usr/bin/supervisord -nc /etc/supervisor/supervisord.conf &
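The difference is easy to see in isolation. A minimal demonstration, using `true` as a stand-in for supervisord:

```shell
# exec replaces the shell, so nothing after it ever runs:
sh -c 'echo before; exec true; echo after-exec'   # prints only: before
# A background job (&) does not replace the shell, so the script continues:
sh -c 'echo before; true & echo after-bg'          # prints: before, after-bg
```

Applied to the entrypoint, dropping `exec` (and backgrounding supervisord if it doesn't daemonize) lets the `touch /.es_created` and final `exec "$@"` lines run.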