Files in docker container disappear - bash

I am working on creating a backup script for some Docker containers. Something really strange happens when I copy files from a directory inside a Docker container to a host-mounted directory: the files disappear.
EDIT: I managed to simplify the example and isolate the strange phenomenon:
#!/usr/bin/env bash
docker run -it --name gen_skeleton_cont \
mailman_server \
ls /etc && \
echo "Second ls:" && \
ls /etc \
# Cleanup the gen_skeleton_cont:
docker rm -f gen_skeleton_cont
The output of running this script is:
$ sudo bash check_incon.sh
Muttrc bash.bashrc cron.monthly environment hosts.allow issue.net logcheck mke2fs.conf os-release python rc6.d services sudoers ufw
Muttrc.d bash_completion.d cron.weekly fstab hosts.deny kbd login.defs modprobe.d pam.conf python2.7 rcS.d sgml sudoers.d update-motd.d
X11 bindresvport.blacklist crontab fstab.d init kernel logrotate.conf modules pam.d python3 resolv.conf shadow supervisor upstart-xsessions
adduser.conf blkid.conf dbus-1 gai.conf init.d ld.so.cache logrotate.d mtab passwd python3.4 resolvconf shadow- sysctl.conf vim
aliases blkid.tab debconf.conf group initramfs-tools ld.so.conf lsb-release network passwd- rc.local rmt shells sysctl.d vtrgb
aliases.db ca-certificates debian_version group- inputrc ld.so.conf.d magic networks perl rc0.d rpc skel syslog-ng wgetrc
alternatives ca-certificates.conf default gshadow insserv ldap magic.mime newt postfix rc1.d rsyslog.conf ssl systemd xml
apache2 console-setup deluser.conf gshadow- insserv.conf legal mailcap nologin ppp rc2.d rsyslog.d subgid terminfo
apparmor cron.d depmod.d host.conf insserv.conf.d libaudit.conf mailcap.order nsswitch.conf profile rc3.d securetty subgid- timezone
apparmor.d cron.daily dhcp hostname iproute2 locale.alias mailman ntp.conf profile.d rc4.d security subuid ucf.conf
apt cron.hourly dpkg hosts issue localtime mime.types opt protocols rc5.d selinux subuid- udev
Second ls:
acpi ca-certificates.conf dhcp host.conf kbd lsb-release opt python3.4 screenrc sudoers w3m
adduser.conf calendar digitalocean hostname kernel ltrace.conf os-release rc0.d securetty sudoers.d wgetrc
alternatives chatscripts dpkg hosts kernel-img.conf magic pam.conf rc1.d security sysctl.conf wireshark
apm cloud environment hosts.allow landscape magic.mime pam.d rc2.d selinux sysctl.d wpa_supplicant
apparmor console-setup fish hosts.deny ldap mailcap passwd rc3.d services systemd X11
apparmor.d cron.d fonts ifplugd ld.so.cache mailcap.order passwd- rc4.d sgml terminfo xml
apport cron.daily fstab init ld.so.conf manpath.config perl rc5.d shadow timezone zsh_command_not_found
apt cron.hourly fstab.d init.d ld.so.conf.d mime.types pm rc6.d shadow- ucf.conf
at.deny cron.monthly fuse.conf initramfs-tools legal mke2fs.conf polkit-1 rc.digitalocean shells udev
bash.bashrc crontab gai.conf inputrc libaudit.conf modprobe.d popularity-contest.conf rc.local skel ufw
bash_completion cron.weekly groff insserv libnl-3 modules ppp rcS.d smi.conf updatedb.conf
bash_completion.d dbus-1 group insserv.conf locale.alias mtab profile resolvconf ssh update-manager
bindresvport.blacklist debconf.conf group- insserv.conf.d localtime nanorc profile.d resolv.conf ssl update-motd.d
blkid.conf debian_version grub.d iproute2 logcheck network protocols rmt subgid update-notifier
blkid.tab default gshadow iscsi login.defs networks python rpc subgid- upstart-xsessions
byobu deluser.conf gshadow- issue logrotate.conf newt python2.7 rsyslog.conf subuid vim
ca-certificates depmod.d hdparm.conf issue.net logrotate.d nsswitch.conf python3 rsyslog.d subuid- vtrgb
gen_skeleton_cont
As can be seen, the two invocations of ls give different results. Maybe the container hasn't finished loading? I must be missing something.
If it helps, the full repository (including the Dockerfiles) is here:
https://github.com/realcr/mailman_docker

I think I found the problem. It is not related to Docker at all. It's a bash thing.
When invoking:
docker run -it --name gen_skeleton_cont \
mailman_server \
ls /etc && \
echo "Second ls:" && \
ls /etc \
The first ls happens inside the Docker container; however, the second one happens on the host machine. I should find some other way to run multiple commands inside a Docker container, maybe using another .sh file.
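One way to keep the whole chain inside the container is to hand it to a shell as a single quoted argument. A minimal sketch of the quoting fix (the docker lines reuse the image and container names from the question and are shown as comments):

```shell
# The single quotes keep the entire && chain as ONE argument, so it is
# parsed by the inner shell, not split at && by the host's bash:
sh -c 'ls /etc > /dev/null && echo "Second ls:" && ls /etc > /dev/null'

# Applied to the script from the question:
#   docker run -it --name gen_skeleton_cont mailman_server \
#     sh -c 'ls /etc && echo "Second ls:" && ls /etc'
#   docker rm -f gen_skeleton_cont
```

With this form both ls invocations run in the same environment; the demo line above prints only Second ls: between the two (suppressed) listings.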


How to add an entry to the hosts file inside a Docker container?

I have a Kafka instance running on my local machine (macOS Mojave) and I'm trying to have a Docker container see it.
There are two relevant files in the Java program that will be built into the Docker image:
docker-entrypoint.sh:
#!/bin/bash
HOST_DOMAIN="kafka"
HOST_IP=$(awk '/32 host/ { print f } {f=$2}' <<< "$(</proc/net/fib_trie)" | head -n 1)
Dockerfile:
# ...
COPY docker-entrypoint.sh ./
RUN chmod 755 docker-entrypoint.sh
RUN apt-get install -y sudo
CMD ["./docker-entrypoint.sh"]
Now I want to write the following line:
$HOST_IP\t$HOST_DOMAIN
to /etc/hosts so the Docker container can work with Kafka. How can I do that, considering elevated access is needed to write to that file? I have tried these:
1- Changing CMD ["./docker-entrypoint.sh"] to CMD ["sudo", "./docker-entrypoint.sh"]
2- Using sudo tee
3- Using su root;tee ...
4- Running echo "%<user> ALL=(ALL) ALL" | tee -a /etc/sudoers > /dev/null, so I can then tee ... without sudo.
1, 2, and 3 lead to the following error:
sudo: no tty present and no askpass program specified
I don't understand this error. Searching for it turned up solutions for cases where one is SSHing in to run a command, but there is no SSH involved here.
To do 4, I already need sudo access, correct?
So, how can I achieve what I'm looking to do?
Dockerfile commands typically run as root unless you've changed the user account, so you should not need sudo.
You don't need to edit any hosts file:
you can use host.docker.internal to reach the host from the container (see https://docs.docker.com/docker-for-mac/networking/).
Otherwise, just run Kafka in a container if you want to set things up locally.
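For completeness, if you do still want the hosts entry: since the entrypoint runs as root by default, the append needs no sudo at all. A sketch, with the IP hardcoded in place of the fib_trie awk pipeline from the question and the actual write to /etc/hosts left as a comment:

```shell
#!/bin/bash
# Hypothetical docker-entrypoint.sh fragment. The container's main
# process runs as root by default, so /etc/hosts is writable directly.
HOST_DOMAIN="kafka"
HOST_IP="172.17.0.1"   # in the real script: the fib_trie awk pipeline
entry=$(printf '%s\t%s' "$HOST_IP" "$HOST_DOMAIN")
echo "$entry"
# Inside the container's entrypoint you would then do:
#   echo "$entry" >> /etc/hosts
```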

kubectl bash completion doesn't work in ubuntu docker container

I'm using kubectl from within a docker container running on a Mac. I've already successfully configured the bash completion for kubectl to work on the Mac, however, it doesn't work within the docker container. I always get bash: _get_comp_words_by_ref: command not found.
The Docker image is based on ubuntu:16.04 and kubectl is installed via the following lines (snippet from the Dockerfile):
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
mv kubectl /usr/local/bin
echo $BASH_VERSION gives me 4.3.48(1)-release, and according to apt, the bash-completion package is installed.
I'm using iTerm2 as terminal.
Any idea why it doesn't work or how to get it to work?
Ok, I found it - I simply needed to do a source /etc/bash_completion before or after the source <(kubectl completion bash).
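Spelled out, the fix amounts to these two lines in the container's ~/.bashrc (a .bashrc fragment; /etc/bash_completion is the script that defines helpers such as _get_comp_words_by_ref):

```shell
# Load the bash-completion framework (defines _get_comp_words_by_ref
# and friends) alongside kubectl's generated completion code:
source /etc/bash_completion
source <(kubectl completion bash)
```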
check .bashrc:
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
    . /etc/bash_completion
fi
A Linux container running on macOS is a separate environment: yes, it may look like a session of the macOS shell, but it is not. Shell history, properties, and functions are a different story.
Moreover, if the container has no persistent volume mounted, all of those settings are transient and won't survive a container restart.
The approaches to getting bash completion on macOS and in the Ubuntu container are similar, but they require different steps:
macOS side - permanent support for kubectl bash completion:
use homebrew to install support:
brew install bash-completion
kubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl
The Ubuntu container's approach to having kubectl and bash completion support built in:
You can adapt this set of commands and use them in a Dockerfile during image preparation:
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubectl
echo 'source <(kubectl completion bash)' >> ~/.bashrc
If you (or a user) afterwards execute /bin/bash in the running container, you should get completion working:
docker exec -it <container_id> /bin/bash
This starts a bash shell with bash completion enabled.
I combined the two top answers for Ubuntu 22.04:
edit ~/.bashrc and add
source /etc/bash_completion
before
source <(kubectl completion bash)
alias k=kubectl
complete -o default -F __start_kubectl k

Can't disable WiFi power management on Raspberry Pi 3

Whenever I open sudo nano /etc/network/interfaces the file is essentially empty, which is hindering me because I need to disable the power-saving feature that automatically disables the WiFi after a minute or so.
This is what shows in my file
# interfaces(5) file used by ifup(8) and ifdown(8)
# Please note that this file is written to be used with dhcpcd
# For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'
# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d
Because of this there is nowhere I can add the wireless-power off line and have it work. I have already tried adding it at the bottom, but it does not work.
Firstly you should repost this over at https://raspberrypi.stackexchange.com/
Secondly, I was just facing the same issue, and solved it by entering this line into root's crontab:
@reboot /sbin/iw dev wlan0 set power_save off &
I used the script below to PERSISTENTLY kill WiFi power management across reboots. It's done as a systemd service, so it's independent of how the network interfaces are configured and "just works".
It should work on any modern Pi that has systemd. Just copy & paste the bash script below into a file, set it to executable and run sudo ./fileName.sh:
if [ ! -d /root/scripts ]; then
    mkdir /root/scripts
fi
apt-get -y install iw
apt-get -y install wireless-tools
cat <<EOF> /root/scripts/pwr-mgmnt-wifi-disable.sh
#!/bin/bash
iw dev wlan0 set power_save off
EOF
chmod 700 /root/scripts/pwr-mgmnt-wifi-disable.sh
cat <<EOF> /etc/systemd/system/pwr-mgmnt-wifi-disable.service
[Unit]
Description=Disable WiFi Power Management
Requires=network-online.target
After=hostapd.service
[Service]
User=root
Group=root
Type=oneshot
ExecStart=/root/scripts/pwr-mgmnt-wifi-disable.sh
[Install]
WantedBy=multi-user.target
EOF
chmod 644 /etc/systemd/system/pwr-mgmnt-wifi-disable.service
systemctl enable pwr-mgmnt-wifi-disable.service
systemctl start pwr-mgmnt-wifi-disable.service

boot2docker startup script to mount local shared folder with host

I'm running boot2docker 1.3 on Win7.
I want to connect a shared folder.
In the VirtualBox Manager under the image properties->shared folders I've added the folder I've want and named it "c/shared". The "auto-mount" and "make permanent" boxes are checked.
When boot2docker boots, it isn't mounted though. I have to do an additional:
sudo mount -t vboxsf c/shared /c/shared
for it to show up.
Since I'll need that every time I use docker, I'd like it to just run on boot, or already be there. So I thought there might be some startup script I could add to, but I can't seem to find where that would be.
Thanks
EDIT: It's yelling at me about this being a duplicate of Boot2Docker on Mac - Accessing Local Files which is a different question. I wanted to mount a folder that wasn't one of the defaults such as /User on OSX or /c/Users on windows. And I'm specifically asking for startup scripts.
/var/lib/boot2docker/bootlocal.sh probably fits your need; it will be run by the initial script /opt/bootscripts.sh.
bootscripts.sh will also put the output into /var/log/bootlocal.log; see the segment below (boot2docker 1.3.1):
# Allow local HD customisation
if [ -e /var/lib/boot2docker/bootlocal.sh ]; then
/var/lib/boot2docker/bootlocal.sh > /var/log/bootlocal.log 2>&1 &
fi
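Applied to the shared-folder case, a minimal bootlocal.sh could do the mount itself. This is a sketch, not a tested configuration (folder names taken from the question; boot2docker runs bootlocal.sh as root at boot, which is why the mount needs no sudo here):

```shell
#!/bin/sh
# /var/lib/boot2docker/bootlocal.sh -- executed as root late in boot.
mkdir -p /c/shared
mount -t vboxsf c/shared /c/shared
```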
One use case for me:
I usually put the shared directory at /c/Users/larry/shared, then add this script:
#!/bin/bash
ln -s /c/Users/larry/shared /home/docker/shared
So each time I can access ~/shared in boot2docker, the same as on the host.
see FAQ.md (provided by @KCD)
If using boot2docker (Windows) you should do the following:
First create a shared folder for the boot2docker VM:
"C:/Program Files/Oracle/VirtualBox/VBoxManage" sharedfolder add default -name some_shared_folder -hostpath /c/some/path/on/your/windows/box
#Then make this folder automount
docker-machine ssh
vi /var/lib/boot2docker/profile
Add the following at the end of the profile file:
sudo mkdir /windows_share
sudo mount -t vboxsf some_shared_folder /windows_share
Restart docker-machine
docker-machine restart
Verify that folder content is visible in boot2docker:
docker-machine ssh
ls -al /windows_share
Now you can mount the folder using either docker run or docker-compose, e.g.:
docker run -it --rm --volume /windows_share:/windows_share ubuntu /bin/bash
ls -al /windows_share
If changes in the profile file are lost after a VM or Windows restart, do the following:
1) Edit file C:\Program Files\Docker Toolbox\start.sh and comment out following line:
#line number 44 (or somewhere around that)
yes | "${DOCKER_MACHINE}" regenerate-certs "${VM}"
#change the line above to:
# yes | "${DOCKER_MACHINE}" regenerate-certs "${VM}"
Thanks for your help with this. A few additional flags I needed to add in order for the new mount to be accessible by the boot2docker "docker" user:
sudo mount -t vboxsf -o umask=0022,gid=50,uid=1000 Ext-HD /Volumes/Ext-HD
With docker 1.3 you do not need to manually mount anymore. Volumes should work properly as long as the source on the host vm is in your user directory.
https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
I couldn't make it work following Larry Cai's instructions. I figured out I could make changes to "c:\Program Files\Boot2Docker for Windows\start.sh" instead, adding my mount command below the line
eval "$(./boot2docker.exe shellinit 2>/dev/null | sed 's,\\,\\\\,g')"
like so:
eval "$(./boot2docker ssh 'sudo mount -t vboxsf c/shared /c/shared')"
I also added the command to start my container here:
eval "$(docker start KDP)"

Transmission will not run shell script after torrent download completed

I am looking to have the Transmission bittorrent client execute a script that changes the owner and permissions of all torrents in the completed folder when a torrent completes downloading.
I am using the following relevant settings in /etc/transmission-daemon/settings.json:
"download-dir": "/data/transmission/completed",
"script-torrent-done-enabled": true,
"script-torrent-done-filename": "/home/user/script.sh",
The script does not seem to be executing after a torrent completes. I know there could be other issues going on aside from the content of the script itself. The owner of the script file is debian-transmission and I have the permissions set to 777, so there shouldn't be an issue with Transmission accessing the script unless I have missed something here.
The /home/user/script.sh file is as follows:
#!/bin/bash
echo sudopassword | /usr/bin/sudo -S /bin/chmod -f -R 777 /data/transmission/completed
echo sudopassword | /usr/bin/sudo -S /bin/chown -f -R user /data/transmission/completed
I know it is poor form to use a sudo command in this fashion, but I can execute the script on its own and it works correctly. I am not sure why Transmission is not executing it. Transmission supports some environment variables, such as TR_TORRENT_NAME, that I would like to use once the script is triggered. Is there anything I have not set up in the file that would prevent the script from working correctly, and how would I use the environment variables?
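On the environment-variable part of the question: when Transmission invokes the done-script it passes no arguments and instead exports variables such as TR_TORRENT_DIR, TR_TORRENT_NAME and TR_TORRENT_ID into the script's environment. A hedged sketch (for demonstration it falls back to a throwaway directory when the variables are unset, so it can run outside Transmission):

```shell
#!/bin/bash
# Hypothetical done-script using Transmission's environment variables.
TR_TORRENT_DIR="${TR_TORRENT_DIR:-$(mktemp -d)}"
TR_TORRENT_NAME="${TR_TORRENT_NAME:-example-download}"
torrent_path="$TR_TORRENT_DIR/$TR_TORRENT_NAME"
touch "$torrent_path"                 # stand-in for the completed download
# Fix permissions on just the finished torrent, not the whole folder:
chmod -R u+rwX,go+rX "$torrent_path"
echo "done-script handled: $torrent_path"
```

Scoping the chmod to $torrent_path avoids re-touching everything under the completed directory on every download; the sudo/chown part is better solved by running the daemon as your own user, as the answer below describes.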
I'll probably answer a different question here, but if you're trying this simply to gain write permissions on your Transmission daemon's downloads for your user, try a different approach.
I'm running my Transmission daemon under my username, as set in its systemd service file (/etc/systemd/system/multi-user.target.wants/transmission-daemon.service in my case):
[Unit]
Description=Transmission BitTorrent Daemon
After=network.target
[Service]
# Set the user and group here :) UMask controls permissions on new
# files: 0022 gives 644 (u+w), 0002 gives 664 (g+w), 0000 gives 666 (a+w).
User=myuser
Group=mygroup
UMask=0022
Type=notify
ExecStart=/usr/bin/transmission-daemon -f --log-error
ExecStop=/bin/kill -s STOP $MAINPID
ExecReload=/bin/kill -s HUP $MAINPID
[Install]
WantedBy=multi-user.target
Notice User, Group and UMask (with capital M) directives.
See "Execution environment configuration" in the systemd.exec(5) manpage.
Then run:
sudo chown -fR user /data/transmission/completed
sudo systemctl daemon-reload
sudo service transmission-daemon restart
and you should be set :)
Add the user who will execute the script to a group with default sudo access.
Fedora - add the user to the wheel group:
sudo usermod -aG wheel $(whoami)
Ubuntu - use the sudo group (or admin, which is deprecated).
