I have written my own unit template:
/etc/systemd/user/my_service@.service
This is the base for a bunch of services I want to come up at boot. I wanted to create each service as a symlink to the template, something like this:
$ cd /etc/systemd/user
$ sudo ln -s my_service@.service my_service@runner001.service
$ sudo ln -s my_service@.service my_service@runner002.service
...
But if I now try to enable the services I get:
$ sudo systemctl enable my_service@runner001.service
Failed to execute operation: Too many levels of symbolic links
To solve this I copied the file instead:
$ cd /etc/systemd/user
$ sudo cp my_service@.service my_service@runner001.service
$ sudo cp my_service@.service my_service@runner002.service
...
Now systemctl enable works as expected.
But: making all those copies and keeping them synced when the template changes is awkward and sort of defeats the purpose of a templating system, I think.
What am I missing here? Could the runners be enabled without copying the template?
I'm using Ubuntu 16.04 and systemd 229.
Btw: if, after enabling the service, I replace the copy with a symlink, it will still work for some systemctl commands (daemon-reload, start, stop and status), while is-enabled, enable and disable will all fail with:
Failed to execute operation: Too many levels of symbolic links
Strange, I think, and I have not tried rebooting my system to see whether that works...
You don't need symlinks. Just use
systemctl start/enable my_service@runner001.service. Then systemd will use your template directly.
See https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Specifiers for more info about specifiers (i.e., variables you can use inside your template).
If you want more help, paste your template and the expected result.
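For illustration only, a minimal template (hypothetical paths and options) might look like this; %i expands to whatever follows the @ in the instance name (runner001, runner002, ...):

[Unit]
Description=My runner %i

[Service]
ExecStart=/usr/local/bin/my_service --instance %i

[Install]
WantedBy=default.target

Enabling my_service@runner001.service then instantiates the template with %i=runner001, with no copies or manual symlinks needed.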
I am running a bash script with sudo and have tried the below, but I am getting the error below when using aws cp. I think the problem is that the script is looking for the config in /root, which does not exist. However, doesn't the -E preserve the original environment? Is there an option that can be used with aws cp to pass the location of the config? Thank you :).
sudo -E bash /path/to/.sh
- inside of this script is `aws cp`
Error
The config profile (name) could not be found
I have also tried `export`ing the profile name and `source`ing the path to the `config`.
You can run the command as the original user, like this:
sudo -u "$SUDO_USER" aws cp ...
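If the rest of the script has to keep running as root, an alternative sketch is to point the AWS CLI at the invoking user's config explicitly; AWS_CONFIG_FILE and AWS_SHARED_CREDENTIALS_FILE are documented AWS CLI environment variables, though the home directory path below is an assumption:

export AWS_CONFIG_FILE="/home/$SUDO_USER/.aws/config"
export AWS_SHARED_CREDENTIALS_FILE="/home/$SUDO_USER/.aws/credentials"
aws cp ...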
You could also run the script using source instead of bash -- using source will cause the script to run in the same shell as your open terminal window, which will keep the same environment (such as the user) - though honestly, @Philippe's answer is the better, more correct one.
I am using Bash on Ubuntu on Windows, the way to run Bash on Windows 10. I have the Creators Update installed and the Ubuntu version is 16.04.
I was playing recently with things such as npm, Node.js and Docker, and for Docker I found it is possible to install and run it on Windows and just use the client part from Bash, calling the docker.exe file in Windows's Program Files folder directly. I just updated my PATH variable to include the path to Docker, as PATH=$PATH:~/mnt/e/Program\ Files/Docker/ (put in .bashrc), and then I am able to run Docker from Bash by calling docker.exe.
But hey, this is Bash and I don't want to write .exe at the end of commands (programs). I can simply add an alias, alias docker="docker.exe", but then I want to use, say, docker-compose, and I have to add another one. I was thinking about adding a script to .bashrc that would go over the PATH variable, search for .exe files in every path specified there, and add an alias for every occurrence, but that does not seem to be a very clean solution (though I guess it would serve its purpose quite well).
Is there a simple and clean solution to achieve this?
I've faced the same problem when trying to use Docker for Windows from WSL.
I had plenty of existing shell scripts that ran fine under Linux, and mostly under WSL too, until they failed with docker: command not found. Changing docker to docker.exe everywhere would be too cumbersome and non-portable.
At first I tried the workaround with aliases in ~/.bashrc:
shopt -s expand_aliases
alias docker=docker.exe
alias docker-compose=docker-compose.exe
But this requires every script to be run in interactive mode, and it still doesn't work within backticks without modifying the script.
Then I tried exported bash functions in ~/.bashrc:
docker() { docker.exe "$@"; }
export -f docker
docker-compose() { docker-compose.exe "$@"; }
export -f docker-compose
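One nice property of exported functions is that non-interactive child shells inherit them too, so scripts and backticks work without modification; a quick check:

$ bash -c 'docker --version'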
This works. But it's still too tedious to add every needed exe.
Finally I ended up with an easier symlink approach and a modified wslshim custom helper script.
Just add this once to ~/.local/bin/wslshim:
#!/bin/bash -x
# symlink the first matching Windows executable (.exe, .ps1, .cmd or .bat) into ~/.local/bin
cd ~/.local/bin && ln -s "$(which $1.exe)" "$1" || ln -s "$(which $1.ps1)" "$1" || ln -s "$(which $1.cmd)" "$1" || ln -s "$(which $1.bat)" "$1"
Make it executable: chmod +x ~/.local/bin/wslshim
Then adding any "alias" becomes as easy as typing two words:
$ wslshim docker
+ cd ~/.local/bin
++ which docker.exe
+ ln -s '/mnt/c/Program Files/Docker/Docker/resources/bin/docker.exe' docker
$ wslshim winrm
+ cd ~/.local/bin
++ which winrm.exe
+ ln -s '' winrm
ln: failed to create symbolic link 'winrm' -> '': No such file or directory
++ which winrm.ps1
+ ln -s '' winrm
ln: failed to create symbolic link 'winrm' -> '': No such file or directory
++ which winrm.cmd
+ ln -s /mnt/c/Windows/System32/winrm.cmd winrm
The script automatically picks up the absolute path to any Windows executable found in $PATH and symlinks it, without the extension, into ~/.local/bin, which also resides in $PATH on WSL.
This approach can easily be extended to auto-link every exe in a given directory if needed, but linking the whole $PATH would be overkill.
You should be able to simply add the directory containing the executable to your PATH. Put the export line in your ~/.bashrc so it persists across sessions.
Command:
export PATH=$PATH:/path/to/directory/executable/is/located/in
On my Windows 10 machine the solution was to install Git Bash and Docker for Windows.
In this bash, when I type docker it works,
for example docker ps.
I didn't need to make an alias or change the path.
You can download Git Bash from https://git-scm.com/download/win,
then from the Start button, search for "git bash".
Hope this solution works for you.
Is there any easy way to convert a Dockerfile to a Bash script in order to install all the software on a real OS? The reason is that I can not change the docker container, and I would like to change a few things afterwards if they do not work out.
In short - no.
By parsing the Dockerfile with a tool such as dockerfile-parse you could run the individual RUN commands, but this would not replicate the Dockerfile's output.
You would have to be running the same version of the same OS.
The ADD and COPY commands affect the filesystem, which is in its own namespace. Running these outside of the container could potentially break your host system. Your host will also have files in places that the container image would not.
VOLUME mounts will also affect the filesystem.
The FROM image (which may in turn be descended from other images) may have other applications installed.
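To illustrate why running the individual RUN commands alone is lossy, here is a deliberately naive extraction sketch; it breaks on line continuations and ignores ENV, WORKDIR, ARG, COPY and multi-stage builds:

# naive: keep only single-line RUN commands, strip the RUN keyword
grep '^RUN ' Dockerfile | sed 's/^RUN //' > install.sh

Anything the Dockerfile expressed through other instructions, or through the base image itself, is silently lost.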
Writing Dockerfiles can be a slow process if there is a large installation or download step. To mitigate that, try adding new packages as a new RUN command (to take advantage of the cache) and add features incrementally, only optimising/compressing the layers when the functionality is complete.
You may also want to use something like ServerSpec to get a TDD approach to your container images and prevent regressions during development.
Basically you can make a copy of a Docker container's file system using `docker export`, which you can then write to a loop device:
docker build -t <YOUR-IMAGE> ...
docker create --name=<YOUR-CONTAINER> <YOUR-IMAGE>
dd if=/dev/zero of=disk.img bs=1 count=0 seek=1G    # allocate a sparse 1 GiB image file
mkfs.ext2 -F disk.img                               # format it as ext2
sudo mount -o loop disk.img /mnt                    # mount it via a loop device
docker export <YOUR-CONTAINER> | sudo tar x -C /mnt # unpack the container's file system into it
sudo umount /mnt
This converts the Docker container to a raw file system image.
More info here:
http://mr.gy/blog/build-vm-image-with-docker.html
You can of course convert a Dockerfile to bash script commands. It's just a matter of determining what the translation means. All Docker installs apply changes to a "file system layer", which means all the changes can be implemented on a real OS.
An example of this process is here:
https://github.com/thatkevin/dockerfile-to-shell-script
It is an example of how you would do the translation.
You can install applications inside a Dockerfile like this:
FROM <base>
RUN apt-get update -y
RUN apt-get install <some application> -y
I want to detect in a shell script whether a command I am going to run via sudo can in fact be run via sudo. On newer versions of sudo I can do sudo -l "command" and this gives me exactly the result I want.
However, some of the systems have an old version of sudo in which -l "command" isn't available. Another way I was thinking of doing it was to just try running the command and see if sudo prompts for the password. However, I do not see an easy way to do this, as sudo writes the password prompt to the TTY and not to stdout.
Does anyone else know of a straight forward way to do this?
I should also mention that "expect" doesn't seem to be available on the systems with the older sudo revisions, either.
Just for reference, the "difficult" version of sudo appears to be 1.6.8.
On Linux, on (at least) Debian-like systems, you can have a look at /etc/sudoers (and the optional /etc/sudoers.d/* files, if created and included in the main /etc/sudoers), which specifies (among other things):
- the search path (which directories) from which a command can be issued
- the sudo user's (root) privileges
- the groups that can use sudo and their privileges
See the sudoers man page for more information.
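For illustration, a single sudoers rule (user and command are hypothetical) granting one command without a password looks like this:

deploy ALL=(root) NOPASSWD: /usr/sbin/service myapp restart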
If you're only wanting to check that a password is required to run a command, then you should be able to run:
$ sudo -n <command>
E.g.
$ sudo -n echo
sudo: sorry, a password is required to run sudo
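If you want to script the check, here is a minimal sketch, assuming a sudo new enough to support -n (which the 1.6.8 mentioned above appears to predate):

# probe non-interactively: -n makes sudo fail instead of prompting
if sudo -n true 2>/dev/null; then
    echo "sudo works without a password"
else
    echo "sudo needs a password (or is not permitted)"
fi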
I'm installing a lighttpd server on a remote machine using a bash script. After installation, I need to configure the port for the server. The system says I don't have permission to modify the file /etc/lighttpd/lighttpd.conf even though I use sudo:
sudo echo "server.bind=2000" >> /etc/lighttpd/lighttpd.conf
How shall I modify this?
What you're doing is running echo as root, then trying to append its output to the config file as the normal user.
What you want is sudo sh -c 'echo "server.bind=2000" >> /etc/lighttpd/lighttpd.conf'
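An equivalent, commonly used pattern is to let tee do the writing under sudo (-a appends); tee also echoes the line to stdout, which you can discard with > /dev/null:

echo "server.bind=2000" | sudo tee -a /etc/lighttpd/lighttpd.conf > /dev/null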
Try changing the file permissions using chmod so that your user can write to it (note that making a config file world-writable is insecure):
$ sudo chmod a+w /etc/lighttpd/lighttpd.conf
If you don't have the right to change the file /etc/lighttpd/lighttpd.conf check the man page of lighthttpd. If you can start it with a different config file, then create a config file somewhere and start lighthttpd with it.
The problem is that the part to the right of >> is not run under sudo. Either use sudo -i to bring up a root shell and run the command there, or just use an editor as mentioned before.