I'd like to run a script in sudo mode as my consul watch handler. I can run it with the command
consul watch -type key -key mykey sudo -u myaccount /scripts/myscript.sh
But I don't know how to define it in the JSON configuration. I've tried the below but it does not work:
{
  "watches": [{
    "type": "key",
    "key": "mykey",
    "handler_type": "script",
    "args": ["sh", "-c", "sudo", "-u", "myaccount", "/scripts/myscript.sh"]
  }]
}
I am using Consul 1.5.2; this is the error:
[ERR] agent: Failed to run watch handler '[sh -c sudo -u myaccount /scripts/myscript.sh]': exit status 1
Can anyone tell me what's wrong with my json configuration?
I moved the sh -c and got it to work with:
"watches":[{
"type":"key",
"key":"mykey",
"handler_type":"script",
"args":["/bin/sudo","-u","consul","/bin/sh","-c","/home/testscript.sh"]
}]
The -c requires the script to be executable. Also, you need the correct sudo privileges. You might even remove the sh -c altogether when the script is executable.
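If the sudo privileges are what's failing, a sudoers entry along these lines is one way to grant them (a minimal sketch; "consul" as the agent user and the account/script path are assumptions taken from the examples above, adjust to your setup):

# /etc/sudoers.d/consul-watch  (edit with visudo -f /etc/sudoers.d/consul-watch)
# assumed: the agent runs as "consul" and the handler should run as "myaccount"
consul ALL=(myaccount) NOPASSWD: /scripts/myscript.sh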
I have built a Docker Cron Environment to run Cronjobs based on alseambusher/crontab-ui using alpine:3.15.3 & it works great.
For it to work I have had to install a number of things via the Dockerfile, editing it & adding python so it could run a python script, perl for another service, openssl so I could use a self-signed certificate, etc.
As it stands the Container is a lot bigger, which is fine, but if I am to share the container others won't necessarily want or need the services I have added & will likely need others that I haven't.
I would like to be able to add a command in the ENV of a Docker Compose to add services at startup without having to do a full build each time. I'm sure it would be simpler to add build:>args: & have it rebuild the container each startup, but my goal is to have it add to an image only the services that each user needs & declares in the Docker-Compose with no need to have the files for the build on the system.
I know this will mean a longer startup depending on the services, I'm okay with that.
I know it's normal to run cron on the host & have it call into containers, but cron on Windows WSL has to be manually started every time the WSL starts & is easy to forget about & can't really be automated aside from on startup, & I'd like to do this entirely inside Docker.
How can I add an ENV like SERVICE_INSTALL to have it run in BASH (which is already added in the Dockerfile & present at /bin/bash) at container startup?
Ideally I'd like to be able to add multiple SERVICE_INSTALL lines if at all possible.
Example:
SERVICE_INSTALL1='apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python'
SERVICE_INSTALL2='python3 -m ensurepip'
SERVICE_INSTALL3='apk add --no-cache perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs'
Or, if nothing else:
SERVICE_INSTALL=apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python && apk add --no-cache perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs wget curl nodejs npm
but then that leaves the problem of installing things through pip or npm.
I have tried adding a command: to the Docker-Compose but every variation I have tried does not work. I'm also concerned with this method, as from my understanding a command: replaces the startup script in the container rather than adding to it, so it is not ideal; regardless, it doesn't seem like an install command: is possible anyway.
I have tried (each as a single command:, not all together):
command:
- BASH apk --update add openssl
- /bin/bash apk --update add openssl
- BASH RUN apk --update add openssl
- /bin/bash RUN apk --update add openssl
- sh apk --update add openssl
- /bin/sh apk --update add openssl
- apk --update add openssl
Each ends with a message along the lines of Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bash run apk --update add openssl": stat /bin/bash run apk --update add openssl: no such file or directory: unknown
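From what I can tell, in list form each item is treated as a literal argv entry & the first item has to be an actual executable on the PATH, which would explain the stat ... no such file or directory error. As a rough sketch of the two shapes I believe Compose accepts (my own guess, not verified across compose versions):

command: ["/bin/sh", "-c", "apk --update add openssl"]

or as a plain string:

command: /bin/sh -c "apk --update add openssl"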
UPDATE: I discovered a few things trying to get this to work:
for command: to work there must not be a - before it
everything, even across multiple lines, is essentially treated as a single command as though it were all on one line & has to be separated with &&
it will repeat the command, or show the error from failing to execute it, & will not continue to the next until it has completed.
for example, the command mkdir -p /test leaves no logs, but the container never actually starts. While Portainer says it's running, trying to bash into it gives an "is restarting, wait until the container is running" message.
mkdir "-p /test" repeats this message:
mkdir: unrecognized option:
BusyBox v1.34.1 (2022-02-02 18:21:20 UTC) multi-call binary.
Usage: mkdir [-m MODE] [-p] DIRECTORY...
Create DIRECTORY
-m MODE Mode
-p No error if exists; make parent directories as needed
3 times 3-4 seconds apart, then 7 seconds, then 8 seconds, then 15 seconds, 27 seconds, 53 seconds, then hits a minute & continues to grow a few seconds each try.
It also returns the same "wait until the container is running" message when trying to bash in.
mkdir -p "/test" seems to be the correct formatting, it appears to work but leaves no logs & when attempting to bash in it connects, shows the terminal, then exits, attempting to reconnect shows the same container is restarting message, likely because the container stopped once the command was finished & is set to restart: always. commenting out the restart command the container exits.
mkdir -p "/test" followed by a new line with supervisord -c /etc/supervisord.conf (the default start command) has mkdir reporting mkdir: unrecognized option: c
adding "supervisord -c /etc/supervisord.conf" leaves no logs & a restarting container.
reversing the order, with supervisord -c /etc/supervisord.conf 1st has supervisord reporting the error Error: positional arguments are not supported: ['mkdir', '-p', '/test'] For help, use /usr/bin/supervisord -h
bash -c "supervisord -c /etc/supervisord.conf with a new line & && mkdir -p /test with a new line & && mkdir -p /test2" runs with a working container, but no directories created
reversing the order seems to work & creates the directories, with a running container:
command:
  bash -c "mkdir -p /test
  && mkdir -p /test2
  && supervisord -c /etc/supervisord.conf"
Which indicates that it will run them in order, but only proceeds to the next one after the previous finishes.
a test confirmed that the same can be done with other dependencies, so long as the initial startup is last. I'd rather have the container start 1st and then install the dependencies while it is running, as they are not required for the container itself to run but rather are added for use in the cronjobs that will run on a schedule; so if the container starts & the dependencies cannot be used for the 1st 2, 3, even 5 or 10 minutes, that might only affect their 1st attempt if it happens to fall in that time.
This is alright, & I now understand better how the command: option works, but it still requires users to know & properly include the default start command. The command: options are also a lot more particular & easy to get wrong, while ENV variables are something every docker user knows, has experience with, & are simpler to implement.
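One approach I'm considering is a small wrapper entrypoint that loops over SERVICE_INSTALL* variables & runs each one before handing off to the normal start command. A rough sketch (untested; the path /usr/local/bin/entrypoint.sh & the variable naming are my own, & the wrapper itself still has to be baked into the image once or bind-mounted):

#!/bin/bash
# /usr/local/bin/entrypoint.sh - run any SERVICE_INSTALL* env vars, then start as usual
set -u

# pick up SERVICE_INSTALL, SERVICE_INSTALL1, SERVICE_INSTALL2, ... in name order
for var in $(compgen -v | grep '^SERVICE_INSTALL' | sort); do
  cmd="${!var}"
  if [ -n "$cmd" ]; then
    echo "entrypoint: running $var"
    bash -c "$cmd" || echo "entrypoint: $var failed, continuing"
  fi
done

# hand off to the image's default start command
exec supervisord -c /etc/supervisord.conf

and in the Docker-Compose something like:

entrypoint: /usr/local/bin/entrypoint.sh
environment:
  SERVICE_INSTALL1: 'apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python'
  SERVICE_INSTALL2: 'python3 -m ensurepip'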
Currently trying to make an init container on Rancher which will send a curl to one of my services. I am having repeated issues trying to get this to work and I cannot pinpoint why. I am certain my YAML format is correct, and I am installing busybox so curl should be available for use.
You are missing the -c option for the shell, to tell it that it should read commands from the command line instead of from a file:
sh -c 'curl -X POST ...'
So you have to put -c as the first container arg, followed by the whole curl command as a single string:
...
- args:
  - -c
  - curl -X POST ...
...
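Put together, the init container section would look roughly like this (a sketch; the name, image & service URL are placeholders, adjust to your manifest):

initContainers:
  - name: notify-service              # placeholder name
    image: curlimages/curl:latest     # placeholder - needs an image that actually ships curl
    command: ["/bin/sh"]
    args:
      - -c
      - "curl -X POST http://my-service:8080/endpoint"   # placeholder URL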
I'm trying to deploy a django app (dev mode) using chef. The problem is that when I execute the recipe the server doesn't stay alive. The command works when I run it after logging in, presumably because that is a different session. Any suggestions are helpful.
execute 'django_run' do
  user 'root'
  cwd '/var/www/my-app/'
  command 'source ./.venv/bin/activate && sudo -E nohup python2 ./manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1 &'
end
I suspect some weirdness with sudo and & is at play here. Try using sudo -b instead of the ampersand. Also, a better way to do this may be to use the service Chef resource instead of execute:
https://docs.chef.io/resources/service/
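For the sudo -b variant, that would look roughly like this (a sketch keeping your original command, not tested against your setup):

execute 'django_run' do
  user 'root'
  cwd '/var/www/my-app/'
  # let sudo put the process in the background itself instead of a trailing &
  command 'source ./.venv/bin/activate && sudo -b -E nohup python2 ./manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1'
end

And if you go the service route, the recipe side is roughly this (assuming you provide an init script or systemd unit named django-dev yourself, e.g. via a template resource):

service 'django-dev' do
  action [:enable, :start]
end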
I was trying to automate our form compilation process.
I created a .bat file that will do the following:
Login user to host
sudo -S -u oracle bash -c 'bash /frmcmp_batch.sh'
But when I try to run frmcmp_batch.sh, I'm getting the error:
FRM-91500: Unable to start/complete the build.
It is because your display setting isn't right. You can get this to work by setting the following environment variables:
export TERM=vt220
export ORACLE_TERM=vt220
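In a non-interactive run like your .bat/ssh flow, the exports have to happen in the same shell that runs the script, so roughly (a sketch based on your command):

sudo -S -u oracle bash -c 'export TERM=vt220; export ORACLE_TERM=vt220; bash /frmcmp_batch.sh'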
Running PostgreSQL 9.4.5_2 currently.
I have tried
pg_ctl stop -W -t 1 -s -D /usr/local/var/postgres -m f
Normally no news is good news, but afterwards when I run
pg_ctl status -D /usr/local/var/postgres
I get pg_ctl: server is running (PID: 536)
I have also tried
pg_ctl restart -w -D /usr/local/var/postgres -c -m i
Response message is:
waiting for server to shut down.......................... failed
pg_ctl: server does not shut down
I've also checked my /Library/LaunchDaemons/ to see why the service is starting at login, but no luck so far. Anyone have any ideas on where I should check next? Force quit in Activity Monitor also isn't helping me any.
Sadly none of the previous answers helped me; it worked for me with:
brew services stop postgresql
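If you're not sure of the exact service name, brew services list shows everything Homebrew manages:

brew services list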
Cheers
I tried various options; finally, the below command worked.
sudo -u postgres ./pg_ctl -D /your/data/directory/path stop
example
sudo -u postgres ./pg_ctl -D /Library/PostgreSQL/11/data stop
As per the comments, the recommended command is without the ./ when calling pg_ctl:
sudo -u postgres pg_ctl -D /Library/PostgreSQL/11/data stop
Tried sudo and su but no such luck.
Just found this GUI:
https://github.com/MaccaTech/postgresql-mac-preferences
If anyone can help with the terminal commands that would be very much appreciated, but till then the GUI will get the job done.
Had the same issue, I had installed postgres locally and wanted to wrap in a docker container instead.
I solved it pretty radically by 1) uninstalling postgres and 2) killing the leftover process on the postgres port. If you don't uninstall, the process restarts and grabs the port again - look at your Brewfile from brew bundle dump to check for a restart_service: true flag.
I reasoned that, as I am using containers, I should not need the local one anyway, but be warned: this will remove postgres from your system.
brew uninstall postgres
...
lsof -i :5432 # find the PID of the process
kill -9 <the PID you found with the previous command>
Note: if you still want to use psql you can brew install libpq, and add psql to your PATH (the command output shows you what to add to your .zshrc, or similar)
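For example (the prefix depends on your Homebrew install: /opt/homebrew on Apple Silicon, /usr/local on Intel):

brew install libpq
echo 'export PATH="/opt/homebrew/opt/libpq/bin:$PATH"' >> ~/.zshrc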
You can stop the server using this command:
pg_ctl -D /usr/local/var/postgres stop -s -m fast
Adding onto the solutions already stated:
if you decide to use the pg_ctl command, ensure that you are executing it as a user with the permissions to access the databases/database server.
this means:
the currently logged-in user on your terminal should have those permissions
or
first run:
$ sudo su <name_of_database_user>
pg_ctl -D /Library/PostgreSQL/<version_here>/data/ stop
the same goes for the start command.
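e.g. (same data directory as the stop example above):

pg_ctl -D /Library/PostgreSQL/<version_here>/data/ start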
credit: https://gist.github.com/kingbin/9435292
(essentially hosted a file with the commands on github, saved me some time :^) )
I had a stray docker container running Postgres that I had forgotten about.
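If that might be the case for you too, something like this finds and stops it (the name is whatever docker ps shows):

docker ps          # look for anything publishing port 5432
docker stop <name_or_id_from_docker_ps>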