Troubleshooting KVM guest's anaconda installer logs - anaconda

I am trying to install a RHEL 7.9 KVM guest on an Ubuntu 18.04 Azure VM. The anaconda installer fails with some error, but the virt-viewer window closes so quickly that I am unable to read what the exact error is. I know all anaconda logs get stored in /tmp/anaconda.log on the KVM guest, but I am unable to figure out a way to check the contents of that file. I tried mounting the KVM guest disk image using "mount -o loop <image>.img", but it fails with "NTFS signature is missing"; that's probably because the installation fails before the KVM guest's disk is partitioned properly. I am looking for ways to check the contents of that file. Is there any way to redirect the anaconda logs of the guest machine to the Ubuntu host machine? The virt-install script and kickstart file used are pasted below. The RHEL 7.9 installation media was downloaded from https://developers.redhat.com/products/rhel/download.
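One complication: during installation the guest's /tmp lives on a tmpfs inside the installer environment, so anaconda.log may never reach the disk image at all, which is why redirecting the logs (as in the solution below) is the more reliable route. That said, a plain mount -o loop fails on a partitioned raw image; a partition-scanning loop device is one way to inspect whatever did land on disk (a sketch; <name> is a placeholder for the image suffix, and the partition number must be adjusted to the actual layout):
sudo losetup --find --show -P /datadrive/rhel79-oracle-<name>.img
# prints e.g. /dev/loop0; partitions appear as /dev/loop0p1, /dev/loop0p2, ...
sudo mount /dev/loop0p2 /mnt
ls /mnt
sudo umount /mnt && sudo losetup -d /dev/loop0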
virt-install.sh
virt-install --location /datadrive/iso_images/rhel7.9-dvd.iso \
--disk /datadrive/rhel79-oracle-$1.img,size=40,format=raw \
--os-variant rhel7.0 \
--initrd-inject ./ks-rhel79-oracle.cfg \
--extra-args="ks=file:/ks-rhel79-oracle.cfg" \
--vcpus 2 \
--memory 2048 \
--noreboot \
--name rhel79-oracle-$1
ks-rhel79-oracle.cfg
#version=DEVEL
# System authorization information
auth --passalgo=sha512 --useshadow
text
firstboot --disable
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8
# Installation logging level
logging --level=debug
# Network information
network --bootproto=dhcp --device=link --activate
network --bootproto=dhcp --hostname=Azure-image
# Shutdown after installation
shutdown
# Root password
rootpw --plaintext password
# SELinux configuration
selinux --disabled
# System services
services --enabled="sshd,chronyd"
# Do not configure the X Window System
skipx
# System timezone
timezone US/Eastern
# System bootloader configuration
bootloader --append="rootdelay=60 mpath crashkernel=2048M intel_idle.max_cstate=1 processor.max_cstate=1 transparent_hugepage=never numa_balancing=disable mce=ignore_ce modprobe.blacklist=kvm_intel,kvm,iTCO_wdt,iTCO_vendor_support,sb_edac,edac_core" --location=mbr
# Partition scheme
zerombr
clearpart --all
# Disk partitioning information
part swap --fstype="swap" --size=32768
part / --fstype="xfs" --grow --size=6144
%post --logfile=/root/anaconda-composer.log --erroronfail
# Remove random-seed
rm /var/lib/systemd/random-seed
# Clear /etc/machine-id
rm /etc/machine-id
touch /etc/machine-id
%end
%packages
@base
%end
%addon com_redhat_kdump --enable --reserve-mb=2048
%end
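For completeness, the kickstart logging command can itself forward installer logs to a remote syslog server, which avoids touching the disk image entirely (supported by RHEL 7 kickstart; the address below is an assumption, the libvirt default-network gateway as seen from the guest):
logging --level=debug --host=192.168.122.1 --port=6080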

The below solution worked for me. I used the following commands to start rsyslog on port 6080 on the host machine (a RHEL 7.8 Azure VM) and modified the virt-install script as below to direct anaconda logging to the host machine:
yum install -y anaconda
mkdir -p /home/shaaga/remote_inst
eval `analog -p 6080 -o rsyslogd.conf -s /home/shaaga/remote_inst`
virt-install --location /datadrive/iso_images/rhel7.9-dvd.iso \
--disk /datadrive/rhel79-oracle-$1.img,size=40,format=raw \
--os-variant rhel7.0 \
--initrd-inject ./ks-rhel79-oracle.cfg \
--extra-args="ks=file:/ks-rhel79-oracle.cfg" \
--vcpus 2 \
--memory 2048 \
--noreboot \
--name rhel79-oracle-$1 --channel tcp,host=127.0.0.1:6080,mode=connect,target_type=virtio,name=org.fedoraproject.anaconda.log.0
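With rsyslog listening, the guest's logs accumulate under the directory passed to analog with -s. The exact file layout can vary with the anaconda version, but something along these lines lets you watch the installation live:
tail -f /home/shaaga/remote_inst/*/anaconda.log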

Related

Testcontainers with Podman in Java tests

Is it possible to use Testcontainers with Podman in Java tests?
As of March 2022, the Testcontainers library doesn't detect an installed Podman as a valid Docker environment.
Can Podman be a Docker replacement on both MacOS with Apple silicon (local development environment) and Linux x86_64 (CI/CD environment)?
It is possible to use Podman with Testcontainers in Java projects that use Gradle, on Linux and MacOS (both x86_64 and Apple silicon).
Prerequisites
Podman Machine and Remote Client are installed on MacOS - https://podman.io/getting-started/installation#macos
Podman is installed on Linux - https://podman.io/getting-started/installation#linux-distributions
Enable the Podman service
The Testcontainers library communicates with Podman using a socket file.
Linux
Start Podman service for a regular user (rootless) and make it listen to a socket:
systemctl --user enable --now podman.socket
Check the Podman service status:
systemctl --user status podman.socket
Check the socket file exists:
ls -la /run/user/$UID/podman/podman.sock
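To confirm the service actually answers on that socket, Podman implements the Docker-compatible ping endpoint (the http://d/ host part is a dummy required by curl; needs curl 7.40+ for --unix-socket):
curl --unix-socket /run/user/$UID/podman/podman.sock http://d/_ping
# expected output: OK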
MacOS
Podman socket file /run/user/1000/podman/podman.sock can be found inside the Podman-managed Linux VM. A local socket on MacOS can be forwarded to a remote socket on Podman-managed VM using SSH tunneling.
The port of the Podman-managed VM can be found with the command podman system connection list --format=json.
Install jq to parse JSON:
brew install jq
Create a shell alias to forward the local socket /tmp/podman.sock to the remote socket /run/user/1000/podman/podman.sock:
echo "alias podman-sock=\"rm -f /tmp/podman.sock && ssh -i ~/.ssh/podman-machine-default -p \$(podman system connection list --format=json | jq '.[0].URI' | sed -E 's|.+://.+#.+:([[:digit:]]+)/.+|\1|') -L'/tmp/podman.sock:/run/user/1000/podman/podman.sock' -N core#localhost\"" >> ~/.zprofile
source ~/.zprofile
Open an SSH tunnel:
podman-sock
Make sure the SSH tunnel is open before executing tests using Testcontainers.
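The same ping check can verify the tunnel end-to-end before running any tests:
curl --unix-socket /tmp/podman.sock http://d/_ping
# expected output: OK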
Configure Gradle build script
build.gradle
import org.gradle.nativeplatform.platform.OperatingSystem
import org.gradle.nativeplatform.platform.internal.DefaultNativePlatform

test {
    // pick the Podman socket path depending on the operating system
    OperatingSystem os = DefaultNativePlatform.currentOperatingSystem
    if (os.isLinux()) {
        def uid = ["id", "-u"].execute().text.trim()
        environment "DOCKER_HOST", "unix:///run/user/$uid/podman/podman.sock"
    } else if (os.isMacOsX()) {
        environment "DOCKER_HOST", "unix:///tmp/podman.sock"
    }
    environment "TESTCONTAINERS_RYUK_DISABLED", "true"
}
Set DOCKER_HOST environment variable to Podman socket file depending on the operating system.
Disable Ryuk with the environment variable TESTCONTAINERS_RYUK_DISABLED.
Moby Ryuk helps you to remove containers/networks/volumes/images by a given filter after a specified delay.
Ryuk is a technology for Docker and doesn't support Podman. See testcontainers/moby-ryuk#23
The Testcontainers library uses Ryuk to remove containers. Instead of relying on Ryuk to implicitly remove containers, we will explicitly remove containers with a JVM shutdown hook:
Runtime.getRuntime().addShutdownHook(new Thread(container::stop));
Pass the environment variables
As an alternative to configuring Testcontainers in a Gradle build script, you can pass the environment variables to Gradle.
Linux
DOCKER_HOST="unix:///run/user/$UID/podman/podman.sock" \
TESTCONTAINERS_RYUK_DISABLED="true" \
./gradlew clean build -i
MacOS
DOCKER_HOST="unix:///tmp/podman.sock" \
TESTCONTAINERS_RYUK_DISABLED="true" \
./gradlew clean build -i
Full example
See the full example https://github.com/evgeniy-khist/podman-testcontainers
On Linux, this definitely works, even though the official Testcontainers documentation is not really clear about it.
# Enable socket
systemctl --user enable podman.socket --now
# Export env var expected by Testcontainers
export DOCKER_HOST=unix:///run/user/${UID}/podman/podman.sock
export TESTCONTAINERS_RYUK_DISABLED=true
Sources:
https://quarkus.io/blog/quarkus-devservices-testcontainers-podman/
https://github.com/testcontainers/testcontainers-java/issues/2088#issuecomment-893404306
I was able to build on Evgeniy's excellent answer, since Podman has improved in the time since the original answer. On Mac OS, these steps were sufficient for me and made Testcontainers happy:
Edit ~/.testcontainers.properties and add the following line
ryuk.container.privileged=true
Then run the following
brew install podman
podman machine init
sudo /opt/homebrew/Cellar/podman/4.0.3/bin/podman-mac-helper install
podman machine set --rootful
podman machine start
If you don't want to run rootful podman, ryuk needs to be disabled:
export TESTCONTAINERS_RYUK_DISABLED="true"
Running without Ryuk basically works, but lingering containers can sometimes cause problems and name collisions in automated tests. Evgeniy's suggestion of a shutdown hook would resolve this, but would need code changes.
An add-on to @hollycummins's answer. You can get it working without --rootful by setting the following environment variables (or their Testcontainers property counterparts):
DOCKER_HOST=unix:///Users/steve/.local/share/containers/podman/machine/podman-machine-default/podman.sock
TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE=/var/run/user/501/podman/podman.sock
TESTCONTAINERS_RYUK_CONTAINER_PRIVILEGED=true
This will mount the Podman socket of the Linux VM into the Ryuk container. 501 is the UID of the user core in the Linux VM; see podman machine ssh.
Alternatively, if you are running the Testcontainers build inside a Docker container, you can start the service like this:
podman system service -t 0 unix:///tmp/podman.sock &
OR
podman system service -t 0 tcp:127.0.0.1:19999 &
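A client then points DOCKER_HOST at the TCP endpoint instead of a socket path, for example:
export DOCKER_HOST=tcp://127.0.0.1:19999
./gradlew clean build -i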

Netatalk on RPi, resulted in hfs+ drive read-only on RPi and not mounting on macOS

Background
I was trying to use netatalk to create a Time Capsule using a Raspberry Pi 3, following the tutorial here. Some version info:
netatalk 3.1.12
macOS 10.14.5
Raspbian 4.19.50-v7+
Issues and findings
After reaching the last part of the tutorial, and being able to connect over afp://, I realised that the volume is read-only.
I re-read the tutorial and realised that I didn't do the first step, because the drive was already HFS+. My guess is that the "ignore ownership on this volume" setting is essential for netatalk to work properly.
Result / Symptom list
[✔︎] able to connect over afp://
[✔︎] able to mount the external drive on RPi
[𝝬] mounted drive on RPi is read-only
[𝝬] some directories can't be read, either on the RPi or via afp://
i.e. cp results in cp: cannot open 'filename' for reading: Permission denied
[𝝬] unable to mount the external drive on macOS
[𝝬] volume is read-only on macOS over afp://
The configurations used
/etc/fstab
proc /proc proc defaults 0 0
PARTUUID=7e67b292-01 /boot vfat defaults 0 2
PARTUUID=7e67b292-02 / ext4 defaults,noatime 0 1
/dev/sda2 /media/tm hfsplus force,rw,user,auto 0 0
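Worth knowing: the Linux hfsplus driver mounts journaled HFS+ volumes read-only unless force is given, and it silently falls back to read-only when the journal is dirty. If the volume keeps coming up read-only despite the fstab options above, one sequence worth trying (a sketch; forcing read-write on a dirty journal carries some corruption risk, so back up first if possible):
dmesg | grep -i hfs
sudo umount /media/tm
sudo fsck.hfsplus -f /dev/sda2
sudo mount -t hfsplus -o force,rw /dev/sda2 /media/tm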
/etc/netatalk/afp.conf
; Netatalk 3.x configuration file
;
[Global]
; Global server settings
; [Homes]
; basedir regex = /xxxx
;[My AFP Volume]
;path = /media/tm
[Timestone]
path = /media/tm
time machine = yes
/etc/nsswitch.conf
passwd: files
group: files
shadow: files
gshadow: files
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 mdns
networks: files
protocols: db files
services: db files
ethers: db files
rpc: db files
netgroup: nis
/etc/avahi/services/afpd.service
<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
<name replace-wildcards="yes">%h</name>
<service>
<type>_afpovertcp._tcp</type>
<port>548</port>
</service>
<service>
<type>_device-info._tcp</type>
<port>0</port>
<txt-record>model=TimeCapsule</txt-record>
</service>
</service-group>
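To check that Avahi is actually advertising the AFP service, avahi-browse (from the avahi-utils package) can resolve it from the Pi itself:
avahi-browse --resolve --terminate _afpovertcp._tcp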
Attempts to fix
macOS mount doesn't work
macOS force mount doesn't work
macOS GUI Disk Utility First Aid is unable to repair the volume
macOS CLI diskVerify is unable to mount the volume and stops
macOS CLI diskRepair is unable to mount the volume and stops
RPi fsck does not seem to fix the problem
RPi fsck.hfsplus does not seem to fix the problem
Questions and directions
The drive can be mounted read-only (with some access barred) on the RPi, so the data is likely safe. Currently, the drive refuses to mount on macOS, so I can't use macOS to enable "ignore ownership on this volume".
How come the volume (HFS+, created and used on macOS) is mountable on RPi after the tutorial and became unmountable on macOS afterwards?
Given the symptoms, is there any key step that caused this (besides not checking "ignore ownership on this volume")?
Are there any promising avenues toward a resolution? Either:
mount the drive on macOS, which would allow me to fix the permissions and back up the data
fix the permissions on the RPi, so the backup can be done via afp://
or any better suggestions to overcome these obstacles.
This was driving me up the wall for a week. I take it you are trying to do this from the howtogeek or techradar article?
After the installation, from the Raspberry Pi I shut down the system:
sudo shutdown -h now
I unplugged my pi then restarted it (plugged it back in) and ran the following commands:
sudo service avahi-daemon start
sudo service netatalk start
sudo systemctl enable avahi-daemon
sudo systemctl enable netatalk
It worked and I am up and running with my Time Machine!! Hope this helps!
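As a quick sanity check that both services are enabled and come back after a reboot (assuming systemd, as on recent Raspbian):
systemctl is-enabled avahi-daemon netatalk
sudo systemctl status avahi-daemon netatalk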

Build to deploy guest on KVM hangs

I'm using Jenkins to automate the deploy of a virtual appliance. The first step is to build a standard CentOS 7 minimal vm in KVM. I wrote a short bash script to do this task which works when running locally on the KVM machine:
#!/bin/bash
#Variables
diskpath="/var/lib/libvirt/images/"
buildname=$(date +"%m-%d-%y-%H-%M")
vmextension=".dsk"
#Change to images directory
cd /var/lib/libvirt/images/
#Deploy VM with with kickstart file
sudo virt-install \
--name=$buildname \
--nographics \
--hvm \
--virt-type=kvm \
--file=$diskpath$buildname$vmextension \
--file-size=20 \
--nonsparse \
--vcpu=2 \
--ram=2048 \
--network bridge=br0 \
--os-type=linux \
--os-variant=generic \
--location=http://0.0.0.0/iso/ \
--initrd-inject /var/lib/libvirt/images/autobuild-ks.cfg \
--extra-args="ks=http://0.0.0.0/ks/autobuild-ks.cfg console=ttyS0"
(IP addresses have been changed for the purposes of security)
The ISO and the kickstart file are stored on another server and they can both be accessed via http for the purposes of making this script work. To be clear, the script does work.
The problem I have is that when I put this script into Jenkins as a build step, the script works; however, it hangs at the end, after the OS has been installed and the KVM guest begins the shutdown process.
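One thing to be aware of with headless runs like Jenkins (not the eventual root cause here, but relevant to hangs): with --nographics, virt-install attaches to the guest console by default, and its --noautoconsole and --wait flags control whether and how long it blocks. A sketch (flags from the virt-install man page; untested in this exact setup):
sudo virt-install \
... \
--noautoconsole \
--wait -1
# --noautoconsole: do not attach to the guest console (no controlling TTY under Jenkins)
# --wait -1: still block until the installation finishes, then return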
here is the kickstart file:
#version=DEVEL
# System authorization information
auth --enableshadow --passalgo=sha512
# Use Network installation media
url --url=http://0.0.0.0/iso
# Use graphical install
#graphical
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
keyboard --vckeymap=gb --xlayouts='gb'
# System language
lang en_GB.UTF-8
# Network information
network --bootproto=dhcp --device=ens160 --ipv6=auto --activate
network --hostname=hostname.domain.com
# Root password
rootpw --iscrypted taken_encryption_output_out_for_the_purposes_of_security
#Shutdown after installation
shutdown
# System services
services --enabled="chronyd"
# System timezone
timezone Europe/London --isUtc
# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda
autopart --type=lvm
# Partition clearing information
clearpart --none --initlabel
%packages
@^minimal
@core
chrony
kexec-tools
%end
%addon com_redhat_kdump --enable --reserve-mb='auto'
%end
%anaconda
pwpolicy root --minlen=6 --minquality=50 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=50 --notstrict --nochanges --notempty
pwpolicy luks --minlen=6 --minquality=50 --notstrict --nochanges --notempty
%end
I suspect it's something to do with the shutdown option in the Kickstart file but unsure. When I ssh to the kvm server, I can see my newly created vm so the script does work but Jenkins hangs.
[root@sut-kvm01 ~]# virsh list --all
Id Name State
----------------------------------------------------
- 09-22-17-16-21 shut off
So far I have tried shutdown, reboot, and halt (which is the default) in the kickstart file, and none of them have worked for me.
Any ideas how I can get the build to complete successfully? If it hangs, I can't move on to what will be build step number 2.
Help please :-)
Ok so I managed to figure out what the issue was. The issue was nothing to do with Jenkins or the script, but rather with the kickstart file. In a nutshell, I was editing the wrong kickstart file. The file I was editing was the default kickstart file in the /root/ directory, but that is not the same file that was being injected into memory by the script, so the changes I made were having no effect.
Note to self - just because the script works, does not mean the answer to the problem isn't written in the script.
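For anyone hitting something similar: a quick way to confirm which kickstart the installer actually consumed is to read the copy anaconda leaves inside the installed guest (assuming libguestfs-tools is installed; the domain name comes from the virsh listing above):
sudo virt-cat -d 09-22-17-16-21 /root/anaconda-ks.cfg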

Run postgres container with data volumes through docker-machine

I have an issue with running postgres container with set up volumes for data folder on my Mac OS machine.
I tried to run it like this:
docker run \
--name my-postgres \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=some_db_dev \
-v $PG_LOCAL_DATA:/var/lib/postgresql/data \
-d postgres:9.5.1
Every time I got the following result in logs:
* Starting PostgreSQL
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are enabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
initdb: could not create directory "/var/lib/postgresql/data/pg_xlog": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data"
Versions of docker, docker-machine, virtualbox and boot2docker are:
docker-machine version 0.6.0, build e27fb87
Docker version 1.10.2, build c3959b1
VirtualBox Version 5.0.16 r105871
boot2docker 1.10.3
I saw many publications about this topic, but most of them are outdated. I tried a similar solution as for MySQL, but it did not help.
Maybe somebody can update me: does a solution exist to run a postgres container with data volumes through docker-machine?
Thanks!
If you are running docker-machine on a Mac, at this time, you cannot mount to a directory that is not part of your local user space (/Users/<user>/) without extra configuration.
This is because on the Mac, Docker makes a bind mount automatically with the home ~ directory. Remember that since Docker is being hosted in a VM that isn't your local Mac OS, any volume mounts are relative to the host VM - not your machine. That means by default, Docker cannot see your Mac's directories since it is being hosted on a separate VM from your Mac OS.
Mac OS  =>  Linux Virtual Machine  =>  Docker
            ^--------------------------^
                 Docker can see VM
^-------------------X----------------------^
            Docker can't see here
If you open VirtualBox, you can create other mounts (i.e. shared folders) from your local machine to the host VM and then access them that way.
See this issue for specifics on the topic: https://github.com/docker/machine/issues/1826
I believe the Docker team is adding these capabilities in upcoming releases (especially since a native Mac version is in the works).
You should use docker named volumes instead of folders on your local file system.
Try creating a volume:
docker volume create my_vol
Then mount the data directory in your above command:
docker run \
--name my-postgres \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=some_db_dev \
-v my_vol:/var/lib/postgresql/data \
-d postgres:9.5.1
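To confirm the volume is wired up and the database initialized (container name and credentials from the run command above):
docker volume inspect my_vol
docker exec -it my-postgres psql -U admin -d some_db_dev -c '\l'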
Check out my blog post for a whole postgres, Node, TS docker setup for both dev and prod: https://yzia2000.github.io/blog/2021/01/15/docker-postgres-node.html
For more on docker volumes: https://docs.docker.com/storage/volumes/

How do I get Docker to run on a Windows system behind a corporate firewall?

I'm trying to get a working Docker installation following this tutorial:
http://docs.docker.io/en/latest/installation/windows/
So far, I got the VM running with a manually downloaded repository (I followed the GitHub link and downloaded it as a ZIP file, because "git clone" didn't work behind my corporate proxy, even after setting up the proxy with "git config --global http.proxy ..." - it kept asking me for authentication (HTTP 407), although I entered my user name and password).
Now I am in the state in which I should use "docker run busybox echo hello world" (Section "Running Docker").
When I do this, I first get told that Docker is not installed (as shown at the bottom of the tutorial), and then, after I got it with apt-get install docker, I get "Segmentation Fault or critical error encountered. Dumping core and aborting."
What can I do now? Is this because I didn't use git clone, or is something wrong with the Docker installation? I read somewhere that apt-get install docker doesn't install the Docker I want, but some GNOME tool. Can I maybe adjust my apt request to get the right tool?
Windows Boot2Docker behind corporate proxy
(Context: March 2015, Windows 7, behind corporate proxy)
TL;DR: see the GitHub project VonC/b2d:
Clone it and:
configure ..\env.bat following the env.bat.template,
add the alias you want in the 'profile' file,
execute senv.bat then b2d.bat.
You then are in a properly customized boot2docker environment with:
an ssh session able to access internet behind corporate proxy when you type docker search/pull.
Dockerfiles able to access internet behind corporate proxy when they do an apt-get update/install and you type a docker build.
Installation and first steps
If you are admin of your workstation, you can run boot2docker install on your Windows.
It currently comes with:
Boot2Docker 1.5.0 (Docker v1.5.0, Linux v3.18.5)
Boot2Docker Management Tool v1.5.0
VirtualBox v4.3.20-r96997
msysGit v1.9.5-preview20141217
Then, once installed:
add c:\path\to\Boot2Docker For Windows\ in your %PATH%
(one time): boot2docker init
boot2docker start
boot2docker ssh
type exit to exit the ssh session, and boot2docker ssh to go back in: the history of commands you just typed is preserved.
if you want to close the VM, boot2docker stop
You actually can see the VM start or stop if you open the Virtual Box GUI, and type in a DOS cmd session boot2docker start or stop.
Hosts & Proxy: Windows => Boot2Docker => Docker Containers
The main point to understand is that you will need to manage 2 HOSTS:
your Windows workstation is the host to the Linux Tiny Core run by VirtualBox in order for you to define and run containers
(%HOME%\.boot2docker\boot2docker.iso =>
%USERPROFILE%\VirtualBox VMs\boot2docker-vm\boot2docker-vm.vmdk),
Your boot2docker Linux Tiny Core is host to your containers that you will run.
In terms of proxy, that means:
Your Windows Host must have set its HTTP_PROXY, HTTPS_PROXY and NO_PROXY environment variable (you probably have them already, and they can be used for instance by the Virtual Box to detect new versions of Virtual Box)
Your Tiny Core Host must have set http_proxy, https_proxy and no_proxy (note the case, lowercase in the Linux environment) for:
the docker service to be able to query/load images (for example: docker search nginx).
If not set, the next docker pull will get you a dial tcp: lookup index.docker.io: no such host.
This is set in a new file /var/lib/boot2docker/profile: it is profile, not .profile.
the docker account (to be set in /home/docker/.ashrc), if you need to execute any other command (other than docker) which would require internet access
any Dockerfile that you would create (or the next RUN apt-get update will get you, for example, Could not resolve 'http.debian.net').
That means you must add the lines ENV http_proxy http://... first, before any RUN command requiring internet access (see the sketch after this list).
A good no_proxy to set is:
.company,.sock,localhost,127.0.0.1,::1,192.168.59.103
(with '.company' the domain name of your company, for the internal sites)
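Putting that Dockerfile rule into practice, a minimal sketch (placeholder credentials and proxy host; debian picked only because of the http.debian.net example above):
cat > Dockerfile <<'EOF'
FROM debian
# the proxy must be set before any RUN command that needs the network
ENV http_proxy http://<user>:<password>@proxy.company:80
ENV https_proxy http://<user>:<password>@proxy.company:80
RUN apt-get update && apt-get install -y curl
EOF
docker build -t proxy-test .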
Data persistence? Use folder sharing
The other point to understand is that boot2docker uses Tiny Core, a... tiny Linux distribution (the .iso file is only 26 MB).
And Tiny Core offers no persistence (except for a few technical folders): if you modify your ~/.ashrc with all your preferred settings and aliases... the next boot2docker stop / boot2docker start will restore a pristine Linux environment, with your modifications gone.
You need to make sure VirtualBox has the Oracle_VM_VirtualBox_Extension_Pack downloaded and added (in Virtual Box / File / Settings / Extensions, add the Oracle_VM_VirtualBox_Extension_Pack-4.x.yy-zzzzz.vbox-extpack file).
As documented in boot2docker, you will have access (from your Tiny Core ssh session) to /c/Users/<yourLogin> (i.e. %USERPROFILE% is shared by VirtualBox).
Port redirection? For container and for VirtualBox VM
The final point to understand is that no port is exported by default:
your container ports are not visible from your Tiny Core host (you must use -p 80:80 for example in order to expose the 80 port of the container to the 80 port of the Linux session)
your Tiny Core ports are not exported from your Virtual Box VM by default: even if your container is visible from within Tiny Core, your Windows browser won't see it: http://127.0.0.1 won't work ("The connection was reset").
For the first point, docker run -it --rm --name my-apache-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4 won't work without a -p 80:80 in it.
For the second point, define an alias doskey vbm="c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" $*, and then:
- if the Virtual Box 'boot2docker-vm' is not yet started, use vbm modifyvm
- if the Virtual Box 'boot2docker-vm' is already started, use vbm controlvm
Typically, if I realize, during a boot2docker session, that the port 80 is not accessible from Windows:
vbm controlvm "boot2docker-vm" natpf1 "tcp-port80,tcp,,80,,80";
vbm controlvm "boot2docker-vm" natpf1 "udp-port80,udp,,80,,80";
Then, and only then, I can access http://127.0.0.1
Persistent settings: copied to docker service and docker account
In order to use boot2docker easily:
create on Windows a folder %USERPROFILE%\prog\b2d
add a .profile in it (directly in Windows, in %USERPROFILE%\prog\b2d), with your settings and aliases.
For example (I modified the original /home/docker/.ashrc):
# ~/.ashrc: Executed by SHells.
#
. /etc/init.d/tc-functions
if [ -n "$DISPLAY" ]
then
`which editor >/dev/null` && EDITOR=editor || EDITOR=vi
else
EDITOR=vi
fi
export EDITOR
# Alias definitions.
#
alias df='df -h'
alias du='du -h'
alias ls='ls -p'
alias ll='ls -l'
alias la='ls -la'
alias d='dmenu_run &'
alias ce='cd /etc/sysconfig/tcedir'
export HTTP_PROXY=http://<user>:<pwd>@proxy.company:80
export HTTPS_PROXY=http://<user>:<pwd>@proxy.company:80
export NO_PROXY=.company,.sock,localhost,127.0.0.1,::1,192.168.59.103
export http_proxy=http://<user>:<password>@proxy.company:80
export https_proxy=http://<user>:<password>@proxy.company:80
export no_proxy=.company,.sock,localhost,127.0.0.1,::1,192.168.59.103
alias l='ls -alrt'
alias h=history
alias cdd='cd /c/Users/<user>/prog/b2d'
ln -fs /c/Users/<user>/prog/b2d /home/docker
(192.168.59.103 is usually the ip returned by boot2docker ip)
Putting everything together to start a boot2docker session: b2d.bat
create and add a b2d.bat script in your %PATH% which will:
start boot2docker
copy the right profile, both for the docker service (which is restarted) and for the /home/docker user account.
initiate an interactive ssh session
That is:
doskey vbm="c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" $*
boot2docker start
boot2docker ssh sudo cp -f /c/Users/<user>/prog/b2d/.profile /var/lib/boot2docker/profile
boot2docker ssh sudo /etc/init.d/docker restart
boot2docker ssh cp -f /c/Users/<user>/prog/b2d/.profile .ashrc
boot2docker ssh
In order to enter a new boot2docker session, with your settings defined exactly as you want, simply type:
b2d
And you are good to go:
End result:
a docker search xxx will work (it will access internet)
any docker build will work (it will access internet if the ENV http_proxy directives are there)
any Windows file from %USERPROFILE%\prog\b2d can be modified right from ~/b2d.
Or you actually can write and modify those same files (like some Dockerfile) right from your Windows session, using your favorite editor (instead of vi)
And all this, behind a corporate firewall.
Bonus: http only
Tuan adds in the comments:
Maybe my company's proxy doesn't allow https. Here's my workaround:
boot2docker ssh,
kill the docker process and
set the proxy export http_proxy=http://proxy.com, then
start docker with docker -d --insecure-registry docker.io
