I am trying to run a udev rule once a mount is ready on a Vagrant box:
SUBSYSTEM=="bdi",ACTION=="add",RUN+="/usr/bin/screen -m -d bash -c 'sleep 5; cd /vagrant/; sudo -E su -c "pm2 start daemon.json" vagrant;'"
But the command isn't running properly, since pm2 doesn't start.
When I execute /usr/bin/screen -m -d bash -c 'sleep 5; cd /vagrant/; sudo -E su -c "pm2 start daemon.json" vagrant;' manually it does work.
Any ideas?
The nested quotes are surely part of the problem, but the bigger problem is spelled out in the udev manual:
This can only be used for very short-running foreground tasks. Running an event process for a long period of time may block all further events for this or a dependent device. Starting daemons or other long-running processes is not appropriate for udev; the forked processes, detached or not, will be unconditionally killed after the event handling has finished.
So your approach has to be changed. However, let’s suppose the command pm2 start daemon.json is appropriately short-running: your question is interesting anyway, because similar quote-nesting problems arise often. So please consider the rest of this answer as an example for the general case.
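If the daemon really must be started from the udev event, the usual pattern for changing the approach is to hand the work off to systemd instead of RUN, since systemd units are allowed to be long-running. A minimal sketch, assuming you write a vagrant-daemon.service unit yourself (the unit name and pm2 invocation are assumptions):
# udev rule: tag the device for systemd and pull in the unit
SUBSYSTEM=="bdi", ACTION=="add", TAG+="systemd", ENV{SYSTEMD_WANTS}="vagrant-daemon.service"
# /etc/systemd/system/vagrant-daemon.service (hypothetical)
[Unit]
Description=Start the pm2 daemon from /vagrant
[Service]
Type=oneshot
User=vagrant
WorkingDirectory=/vagrant
ExecStartPre=/bin/sleep 5
ExecStart=/usr/bin/env pm2 start daemon.json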
Instead of going mad with the correct escaping sequences, you can just write
RUN+="/usr/bin/screen -m -d bash -c 'sleep 5; cd /vagrant/; sudo -E -u vagrant pm2 start daemon.json"
An even simpler solution might be
RUN+="/usr/bin/screen -m -d /usr/local/bin/start_vagrant_daemon"
where /usr/local/bin/start_vagrant_daemon is executable and has the following content
#!/bin/bash
# wait for the mount to settle, then start pm2 as the vagrant user
sleep 5
cd /vagrant/ || exit
sudo -E -u vagrant pm2 start daemon.json
Both solutions require setting up the correct sudo authorizations, by editing /etc/sudoers or (better) writing them in a new file /etc/sudoers.d/vagrant_daemon after enabling the #includedir /etc/sudoers.d directive in /etc/sudoers.
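For illustration, a hypothetical /etc/sudoers.d/vagrant_daemon could look like the following (the pm2 path is an assumption; validate the file with visudo -cf /etc/sudoers.d/vagrant_daemon before relying on it):
# /etc/sudoers.d/vagrant_daemon (hypothetical)
# let root run pm2 as vagrant without a password, preserving the environment (-E)
root ALL=(vagrant) NOPASSWD:SETENV: /usr/bin/pm2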
I have built a Docker Cron Environment to run Cronjobs based on alseambusher/crontab-ui using alpine:3.15.3 & it works great.
For it to work I have had to install a number of things via the Dockerfile, editing it & adding python so it could run a python script, perl for another service, openssl so I could use a self-signed certificate, etc.
As it stands the Container is a lot bigger, which is fine, but if I am to share the container others won't necessarily want or need the services I have added & will likely need others that I haven't.
I would like to be able to add a command in the ENV of a Docker Compose file to add services at startup without having to do a full build each time. I'm sure it would be simpler to add build: with args: & have it rebuild the container each startup, but my goal is to have it add to an image only the services that each user needs & declares in the Docker Compose file, with no need to have the files for the build on the system.
I know this will mean a longer startup depending on the services, I'm okay with that.
I know it's normal to run cron on the host & have it call into containers, but cron on Windows WSL has to be manually started every time WSL starts, is easy to forget about, & can't really be automated aside from on startup, & I'd like to do this entirely inside Docker.
How can I add an ENV like SERVICE_INSTALL to have it run in BASH (which is already added in the Dockerfile & present at /bin/bash) at container startup?
Ideally I'd like to be able to add multiple SERVICE_INSTALL lines if at all possible.
Example:
SERVICE_INSTALL1='apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python'
SERVICE_INSTALL2='python3 -m ensurepip'
SERVICE_INSTALL3='apk add --no-cache perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs'
Or, if nothing else:
SERVICE_INSTALL=apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python && apk add --no-cache perl perl-html-parser perl-http-cookies perl-lwp-useragent-determined perl-json perl-json-xs wget curl nodejs npm
but then that leaves the problem of installing things through pip or npm.
I have tried adding a command: to the Docker Compose file, but every variation I have tried does not work. I'm also concerned about this method: from my understanding, a command: replaces the startup command of the container rather than adding to it, so it is not ideal regardless. In any case, an install via command: doesn't seem possible anyway.
I have tried: (Each as a single command: not together)
command:
- BASH apk --update add openssl
- /bin/bash apk --update add openssl
- BASH RUN apk --update add openssl
- /bin/bash RUN apk --update add openssl
- sh apk --update add openssl
- /bin/sh apk --update add openssl
- apk --update add openssl
Each ends with a message along the lines of Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bash run apk --update add openssl": stat /bin/bash run apk --update add openssl: no such file or directory: unknown
UPDATE: I discovered a few things trying to get this to work
for command: to work, there must not be a - before it
anything, even on multiple lines, is treated as a single command, essentially as though it were all on one line, & the parts have to be separated with &&
it will repeat the command, or show the error of it failing to execute, & will not continue to the next one until it completes.
for example, the command mkdir -p /test leaves no logs, but the container never actually starts. While Portainer says it's running, trying to bash into it gives an "is restarting, wait until the container is running" message
mkdir "-p /test" repeats this message
mkdir: unrecognized option:
BusyBox v1.34.1 (2022-02-02 18:21:20 UTC) multi-call binary.
Usage: mkdir [-m MODE] [-p] DIRECTORY...
Create DIRECTORY
-m MODE Mode
-p No error if exists; make parent directories as needed
3 times 3-4 seconds apart, then 7 seconds, then 8 seconds, then 15 seconds, 27 seconds, 53 seconds, then it hits a minute & continues to grow by a few seconds each try.
It also returns the same "wait until the container is running" message when trying to bash in.
mkdir -p "/test" seems to be the correct formatting, it appears to work but leaves no logs & when attempting to bash in it connects, shows the terminal, then exits, attempting to reconnect shows the same container is restarting message, likely because the container stopped once the command was finished & is set to restart: always. commenting out the restart command the container exits.
mkdir -p "/test" followed by a new line with supervisord -c /etc/supervisord.conf (the default start command) has mkdir reporting mkdir: unrecognized option: c
adding "supervisord -c /etc/supervisord.conf" leaves no logs & a restarting container.
reversing the order, with supervisord -c /etc/supervisord.conf 1st has supervisord reporting the error Error: positional arguments are not supported: ['mkdir', '-p', '/test'] For help, use /usr/bin/supervisord -h
bash -c "supervisord -c /etc/supervisord.conf with a new line & && mkdir -p /test with a new line & && mkdir -p /test2" runs with a working container, but no directories created
reversing the order seems to work & creates the directories, with a running container:
command:
  bash -c "mkdir -p /test
  && mkdir -p /test2
  && supervisord -c /etc/supervisord.conf"
This indicates that it runs them in order, but only proceeds to the next one after the previous finishes.
a test confirmed that the same can be done with other dependencies, so long as the initial startup command is last. I'd rather have the container start 1st & then install the dependencies while it is running: they are not required for the container itself to run, but are added for use in the cronjobs that run on a schedule, so if the dependencies cannot be used for the 1st 2, 3, even 5 or 10 minutes, that might only affect a job's 1st attempt if it happens to fall in that window.
This is alright; I now understand better how the command: option works, but it still requires users to know & properly include the default start command. The command: option is also a lot more particular & easy to get wrong, while ENV variables are something every Docker user knows, has experience with, & are simpler to implement.
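One way to get the ENV-driven install asked for above is a wrapper entrypoint that runs whatever SERVICE_INSTALL* variables are set and then hands off to the default start command. This is only a sketch under the assumptions already stated in the question (bash at /bin/bash, supervisord -c /etc/supervisord.conf as the default start command); the path /entrypoint.sh is hypothetical:
#!/bin/bash
# /entrypoint.sh (hypothetical): run each SERVICE_INSTALL* variable, then start as usual
for var in $(env | grep -o '^SERVICE_INSTALL[0-9]*' | sort); do
    echo "running install step $var: ${!var}"
    bash -c "${!var}" || echo "install step $var failed, continuing"
done
# hand off to the image's normal start command so it stays PID 1
exec supervisord -c /etc/supervisord.conf
It would be wired in with entrypoint: /entrypoint.sh in the compose file. To let the container come up 1st and install in the background, as preferred above, the for loop could be wrapped in a backgrounded subshell ( ... ) & before the exec.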
I have the following systemd unit file set to automatically update all Arch Linux and AUR packages at the same time (using the yay AUR helper, of course) while also attempting to temporarily add (and then delete after it’s done, for obvious reasons) a sudoers.d entry to briefly give nobody sudo access to pacman in order to get AUR packages updated:
[Unit]
Description=Automatic Update
After=network-online.target
[Service]
Type=simple
SyslogIdentifier=autoupdate
ExecStartPre=/bin/bash -c 'echo \'nobody ALL= NOPASSWD: /usr/bin/pacman\' > /etc/sudoers.d/autoupdate'
ExecStart=/bin/bash -c \"XDG_CACHE_HOME=/var/tmp PWD=/var/tmp sudo -E -u nobody yay -Syuq --noconfirm --devel --timeupdate\"
ExecStartPost=/usr/bin/rm -f /etc/sudoers.d/autoupdate
KillMode=process
KillSignal=SIGINT
[Install]
WantedBy=multi-user.target
The problem is that bash fails to acknowledge the existence of the closing single quote on the ExecStartPre line:
nobody: -c: line 1: unexpected EOF while looking for matching `'`
nobody: -c: line 2: syntax error: unexpected end of file
This is of course despite the fact that manually typing sudo bash -c 'echo nobody ALL\=NOPASSWD: /usr/bin/pacman > /etc/sudoers.d/autoupdate' into my shell succeeds without incident.
What could be causing this discrepancy?
Turns out the overcomplication of the issue was rooted in the use of ExecStartPost= instead of ExecStopPost=. Once I changed the former to the latter, the original version of the unit file from long before this was posted (which was far simpler) worked perfectly.
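For reference, the nested-quote problem itself can also be avoided by putting double quotes inside the single-quoted bash -c string, instead of escaping single quotes within single quotes; a sketch:
ExecStartPre=/bin/bash -c 'echo "nobody ALL= NOPASSWD: /usr/bin/pacman" > /etc/sudoers.d/autoupdate'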
Regardless of why you want to use sudo even though you are root, and without going into the rest of your code: use a script instead.
ExecStartPre=/path/to/your/script prestart
ExecStart=/path/to/your/script start
ExecStartPost=/path/to/your/script poststart
Your script:
#!/bin/bash
case "$1" in
  prestart)
    # temporarily grant nobody passwordless sudo for pacman
    echo "nobody ALL= NOPASSWD: /usr/bin/pacman" > /etc/sudoers.d/autoupdate
    ;;
  start)
    # the assignments must prefix the command so that sudo -E can pass them on
    XDG_CACHE_HOME=/var/tmp PWD=/var/tmp sudo -E -u nobody yay -Syuq --noconfirm --devel --timeupdate
    ;;
  poststart)
    # remove the temporary sudoers entry again
    rm -f /etc/sudoers.d/autoupdate
    ;;
esac
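Assuming the script is saved at /path/to/your/script, it also has to be made executable once:
chmod +x /path/to/your/script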
I have created a Jenkins job today. What it does: the Jenkins user should log into another server and run two commands separated by &&:
ssh -i /creds/jenkins jenkins@servername.com "sh -c 'sudo su && df'"
The login part works fine; then it runs the sudo su command and becomes root, but it never runs the second command (i.e. df).
I even did this manually and from the Jenkins machine logged into the other server (servername). Then ran sh -c "sudo su && df" with no luck.
Can you please help?
Thanks in advance
If you are trying to run the df command as root, you should instead do sudo df.
This is because with sudo su && df, you are basically executing sudo su first and then df.
Also make sure your jenkins user can use sudo without a password.
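For the passwordless part, a minimal sudoers sketch (the df path is an assumption; check it with which df and edit the file with visudo):
jenkins ALL=(root) NOPASSWD: /bin/df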
The sudo su launches a second shell, and the command containing the && df is waiting to be executed in the non-root shell, just after the sudo su shell exits successfully.
This could be what you're looking for:
sh -c 'sudo su - root -c "df"'
Edit: please note that I don't normally use or advocate the use of sudo su - root -c type of constructions. However, I have seen rare cases in which a program doesn't work properly when called via sudo/gksudo, but does work properly when called via su/gksu -- in such cases, a given user should try to use sudo -i first, and if that does not work, one might have to resort to sudo su - root -c or similar, as a workaround of sorts to deal with a "misbehaving" program. Since the OP used some similar syntax on his post, I assumed that his case could be such a workaround case, so I maintained the sudo su - root -c type of structure on my answer.
When you run sudo su && df, sudo su starts a child shell immediately, without waiting for the && df part of the command to execute. When you hit Ctrl+D, the child shell exits and you return to the parent shell; that's when your && df executes. You can do this using here strings instead; it might not be the best option, but it works without leaving you stuck in an interactive child shell:
sh -c "sudo su" <<<df
Note: don't surround <<< df with any quotes.
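Putting the first suggestion together with the original command, the whole Jenkins step can avoid the interactive root shell entirely; a sketch:
ssh -i /creds/jenkins jenkins@servername.com "sudo df"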
This is probably a really simple question; I apologize if it is a duplicate.
I want to know how to remove sudo permissions for one particular command. I've created a script that installs a bunch of .deb packages and it needs sudo to do that, but one command in it needs to run without sudo permissions, so how would I do that? I'm using Ubuntu and this is a bash script.
I'm calling my script: ROS_install
Here is part of the script:
sudo dpkg -i /home/forklift/Desktop/ROS/ros-hydro-laser-proc_0.1.3-0precise-20131015-2054-+0000_amd64.deb
sudo dpkg -i /home/forklift/Desktop/ROS/ros-hydro-urg-c_1.0.403-0precise-20131010-0128-+0000_amd64.deb
sudo rosdep init
sleep 2
rosdep update
The command "rosdep update" needs to be run without sudo permissions. I assumed that it was already, but I get a warning every time I run the script, and thus get locked out of the command after installation.
Rather than give the entire script elevated privileges, just give them to the actual commands that need them. That is, rather than
$ sudo my_script
modify my_script to use sudo only on those commands that need it. For instance, if this is your script:
command1
command2
command3
command4
command5
and command3 is the non-sudo command, modify your script to read
sudo command1
sudo command2
command3
sudo command4
sudo command5
In the process, think about whether command1 actually needs to run with sudo, or if it can run just as well without. That way, you should be able to greatly reduce the number of commands that actually need to be run with sudo in your script.
If your command is running with full privileges, it also has the privilege to demote its own privileges, for good or for the duration of one command, by running su.
touch /privileged                        # runs with full (root) privileges
su -c 'cp /privileged /tmp/not' nobody   # runs demoted, as the nobody user
I assume you are calling your script like:
sudo script.sh
And you do not want all of the commands within the script to run as root.
If your script is like:
apt-get install perl
apt-get install python
mv trash /home/user/
If you only want to run the first two commands as root, you can specify a specific user for the third, like:
su -c "mv trash /home/user/" user
Where user is the username you want to run the command as.
This will allow you to make a single sudo call at the parent level when you call the script.
If you don't want the username hardcoded, you can use a command like logname to get the username of the user that you are logged in as.
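For example, reusing the command above (a sketch; logname resolves the login name, so this assumes the script is run from an interactive session):
su -c "mv trash /home/user/" "$(logname)"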
Just adding to the other answers, you can do this:
su -c "command" $SUDO_USER
This will execute the command as the actual user who typed the sudo command. That's very useful when you are writing scripts that require sudo to install something but also need to write something in the user's $HOME.
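A hypothetical installer fragment showing the pattern (the package and directory names are made up):
#!/bin/bash
# run as: sudo ./install.sh
apt-get install -y some-package                        # needs root
su - "$SUDO_USER" -c 'mkdir -p "$HOME/.config/myapp"'  # runs as the invoking user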
I have a bash script that partially needs to run with default user rights, but some parts involve using sudo (like copying stuff into system folders). I could just run the script with sudo ./script.sh, but that messes up all file access rights if the script creates or modifies files.
So, how can I run script using sudo for some commands? Is it possible to ask for sudo password in the beginning (when the script just starts) but still run some lines of the script as a current user?
You could add this to the top of your script:
# prompt until sudo accepts the password, caching the credentials
while ! echo "$PW" | sudo -S -v > /dev/null 2>&1; do
    read -s -p "password: " PW
    echo
done
That ensures the sudo credentials are cached for 5 minutes. Then you could run the commands that need sudo, and just those, with sudo in front.
Edit: Incorporating mklement0's suggestion from the comments, you can shorten this to:
sudo -v || exit
The original version, which I adapted from a Python snippet I have, might be useful if you want more control over the prompt or the retry logic/limit, but this shorter one is probably what works well for most cases.
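If the script can run longer than that credential cache, a common pattern keeps the timestamp fresh in the background until the script exits (a sketch, not part of the original answer):
sudo -v || exit
# refresh the cached credentials every minute; stop once this script is gone
while true; do sudo -n true; sleep 60; kill -0 "$$" || exit; done 2>/dev/null &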
Each line of your script is a command line. So, for the lines you want, you can simply put sudo in front of those lines of your script. For example:
#!/bin/sh
ls *.h
sudo cp *.h /usr/include/
echo "done" >>log
Obviously I'm just making stuff up. But, this shows that you can use sudo selectively as part of your script.
Just like using sudo interactively, you will be prompted for your user password if you haven't done so recently.