Stackdriver agent in docker container - bash

Is it possible to set up a generic Docker image with the Stackdriver monitoring agents, so that it can send logging data from within the container to Stackdriver, and can then be used across any VM instances, whether on GCE or AWS?
Update
FROM ubuntu:16.04
USER root
ADD . /
ENV GOOGLE_APPLICATION_CREDENTIALS="/etc/google/auth/application_default_credentials.json"
RUN apt-get update && apt-get -y --no-install-recommends install wget curl python-minimal ca-certificates lsb-release libidn11 openssl
RUN curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
RUN bash install-logging-agent.sh
I'm following exactly what the documentation says. The installation goes fine, but google-fluentd fails to start/restart.
Thanks in advance.

Yes, this should be possible according to the documentation.
You will need to make sure that the Stackdriver agent is installed and configured correctly in your Docker image.
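One common failure mode: a container usually has no init system, so the service/systemctl machinery the install script relies on cannot start google-fluentd, which matches the symptom in the question. A minimal sketch of a workaround is to run the agent in the foreground as the container's main process; the binary and config paths below are assumptions based on a default install, so verify them in your image first (e.g. with dpkg -L google-fluentd):
# run the logging agent in the foreground as the container's main process
# (paths assumed from a default install -- verify before relying on them)
CMD ["/usr/sbin/google-fluentd", "-c", "/etc/google-fluentd/google-fluentd.conf"]
This ties the agent's lifetime to the container's and sends its output to docker logs instead of a detached daemon.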

Related

apt-get command not found in coreOS (cos)

I would like to install tools on my cluster VM for debugging, like dnsutils or mysql, to test connections.
My cluster VMs use Container-Optimized OS (COS).
Whenever I try
apt-get update
I get an error:
-bash: apt-get: command not found
How could I achieve this?
As explained here, execute
/usr/bin/toolbox
It will download a Docker image and, once that completes, log you in inside it as the root user.
There you will be able to execute commands like apt-get update / install and debug.
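For example (a sketch; the packages are just the ones from the question, and names may differ by Debian release):
/usr/bin/toolbox
# now inside the toolbox container, as root
apt-get update
apt-get install -y dnsutils mysql-client
nslookup example.com   # debug connectivity from the node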

Can you convert/build a docker image into a full OS image?

I have made a Docker container with some code for deployment. However, I realised that given the structure of the project I'm working with, it's more suitable to deploy a full ISO image instead of running Docker on top of a cloud VM running stock Debian, which adds unnecessary layers of virtualization.
I know that Docker containers are meant to be deployed on Kubernetes, but before diving into that route, is there a simple way to convert a deb9 Docker image into a full deb9 OS image? Like the opposite of docker import?
You can convert your Docker container to a full OS image. A Debian Dockerfile example would be:
FROM debian:stretch
RUN apt-get update && apt-get -y install --no-install-recommends \
linux-image-amd64 \
systemd-sysv
In principle, you have to install a kernel and an init system.
Full instructions can be found on GitHub.
Docker images don't contain a Linux kernel and aren't configured to do things like run a full init system or go out and get their network configuration, so this won't work well.
Put differently: the same mismatch that makes docker import not really work well because the resulting container will have too much stuff and won't be set up for Docker, means you can't export a container or image into a VM and expect it to work in that environment.
But! If you've written a Dockerfile for your image, that's very close to saying "I'm going to start from this base Linux distribution and run this shell script to get an image". Tools like Packer are set up to take a base Linux VM image, run some sort of provisioner, and create a new image. If you have a good Docker setup already and decide a VM is a better fit, you've probably done a lot of the leg-work already to have an automated build of your VM image.
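As a rough sketch of that translation (the file names and package list here are hypothetical), the RUN steps of your Dockerfile become a shell provisioner script that Packer runs against a stock Debian base image:
#!/bin/sh
# provision.sh -- replays what the Dockerfile's RUN steps did, this time on a full VM
set -e
apt-get update
apt-get -y install --no-install-recommends nginx   # stand-in for your real dependencies
# copy in application files, enable services, etc.
Then, with a Packer template that points at a Debian 9 base image and uses this script as a shell provisioner, running packer build debian9.json produces a bootable VM image.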
#info
We can use Docker like a VM / Flatpak / AppImage / Termux, with GUI, audio and hardware acceleration :)
How to do it?
1. Remove the old Docker
sudo apt-get remove docker docker-engine docker.io containerd runc
===================
2. Install the components
sudo apt-get update
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release
=============================
3. Add the Docker repo
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
================
4. Install Docker
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
=============================
5. Create a Docker image
su
cd /home/
kwrite Dockerfile
FROM debian:testing
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install \
    midori bash mpv pulseaudio pulseaudio-utils pulsemixer pulseaudio-module-jack apulse \
    neofetch vlc smplayer wget sudo cairo-dock cairo-dock-plug-ins xfce4 xfce4-goodies \
    falkon kde-full xfce4-whiskermenu-plugin gnome tigervnc-standalone-server \
    openssh-server openssh-client network-manager net-tools iproute2 gerbera \
    openjdk-11-jdk mediainfo dcraw htop gimp krita libreoffice python3-pip terminator uget alsa-utils
ENV PULSE_SERVER=tcp:host.docker.internal:4713
CMD bash
Save it, then run in a terminal:
sudo docker build -t supersifobian .
How to use the GUI
Allow X connections from the container:
xhost +
Install the X helpers on the host:
sudo apt-get -y install xpra xserver-xephyr xinit xauth xclip x11-xserver-utils x11-utils
Run Docker:
sudo docker run -ti --net=host --device=/dev/dri:/dev/dri -e DISPLAY=:0 \
  --privileged --cap-add=ALL --device /dev/snd -v /dev:/dev --group-add audio \
  -v /var/run/docker.sock:/host/var/run/docker.sock -e PULSE_SERVER=tcp:$PULSE_SERVER \
  -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro \
  -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro \
  -v /media:/host/media:ro -v /home:/host/home:ro \
  <image id or name>
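If that full command is overwhelming, a much smaller sketch still gets GUI plus audio (assuming PulseAudio on the host listens on TCP port 4713, matching the ENV in the Dockerfile above):
sudo docker run -ti --net=host --device /dev/snd \
  -e DISPLAY=:0 -e PULSE_SERVER=tcp:127.0.0.1:4713 supersifobian
Add the other flags back one at a time (GPU, mounts, capabilities) as you find you need them.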
NB
Enter the container:
docker exec -it <container name or id> bash
Add a user:
adduser <username>
Make the user a sudoer:
usermod -aG sudo <username>
Add the user to the audio group:
usermod -aG audio <username>
=========================
Audio in the Docker CLI, using paprefs:
a. Install paprefs:
apt-get install paprefs
b. Choose network in paprefs
c. To find out which port PulseAudio uses, type:
pax11publish
d. Export it in the terminal inside Docker:
export "PULSE_SERVER=tcp:192.168.43.135:37721"
==================================
Save the container as an image:
docker commit <container id> <image name>:<version>
========================
Check images:
docker images
==================================
thanks :)

How to access /var/run/docker.sock from inside a docker container as a non-root user? (macOS Host)

I have installed Docker on Mac and everything is running fine. I am using a Jenkins Docker image and running it. While using Jenkins as a CI server to build further images by running Docker commands through it, I learned that we have to bind-mount /var/run/docker.sock when running the Jenkins image so it can access the Docker daemon.
I did that, and installed the Docker CLI inside the Jenkins container. But running docker ps or any other docker command throws an error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.28/containers/json: dial unix /var/run/docker.sock: connect: permission denied
When I connect to the container as the root user, it works fine. But switching to the ‘jenkins’ user throws the above error. I have already added the ‘jenkins’ user to the sudoers list, but it does not help.
I found a few articles suggesting adding the ‘jenkins’ user to a ‘docker’ group, but to my surprise I do not find any docker group on the Mac or inside the container.
Any help is much appreciated. Thanks
It looks like the reason this is happening is pretty straightforward: UNIX permissions are not letting the jenkins user read /var/run/docker.sock. Really, the easiest option is to just change the group assignment on /var/run/docker.sock from root to another group, and then add jenkins to that group:
[as root, inside the container]
root@host:/# usermod -aG docker jenkins
root@host:/# chgrp docker /var/run/docker.sock
This assumes of course that you already have the docker CLI installed, and that a group called docker exists. If not:
[as root, inside the container]
root@host:/# groupadd docker
Alternatively, you could change the world permissions on /var/run/docker.sock to allow non-root users to access the socket, but I wouldn't recommend doing that; it just seems like bad security practice. Similarly, you could outright chown the socket to the jenkins user, although I'd rather just change the group settings.
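For illustration only, those discouraged alternatives would be one-liners like:
chmod o+rw /var/run/docker.sock    # world read/write: anyone in any container can drive the daemon
chown jenkins /var/run/docker.sock # hands the socket to a single user instead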
I'm confused why using sudo didn't work for you. I just tried what I believe is exactly the setup you described and it worked without problems.
Start the container:
[on macos host]
darkstar:~$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
docker.io/jenkins/jenkins:lts
darkstar:~$ docker exec -u root -it <container id> /bin/bash
Install Docker CLI:
[as root, inside container]
root@host:/# apt-get update
root@host:/# apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
root@host:/# rel_id=$(. /etc/os-release; echo "$ID")
root@host:/# curl -fsSL https://download.docker.com/linux/${rel_id}/gpg > /tmp/dkey
root@host:/# apt-key add /tmp/dkey
root@host:/# add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/${rel_id} \
$(lsb_release -cs) stable"
root@host:/# apt-get update
root@host:/# apt-get -y install docker-ce
Then set up the jenkins user:
[as root, inside container]
root@host:/# usermod -aG sudo jenkins
root@host:/# passwd jenkins
[...]
And trying it out:
[as jenkins, inside container]
jenkins@host:/$ sudo docker ps -a
[...]
password for jenkins:
CONTAINER ID IMAGE COMMAND CREATED ...
69340bc13bb2 jenkins/jenkins:lts "/sbin/tini -- /usr/…" 8 minutes ago ...
It seems to work fine for me. Maybe you took a different route to install the Docker CLI? Not sure, but if you want to access the Docker socket using sudo, those steps will work. Although, I think it would be easier to just change the group assignment as explained up above. Good luck :)
Note: All tests performed using macOS Mojave v10.14.3 running Docker Engine v19.03.2. This doesn't seem to be heavily dependent on the host platform, so I would expect it to work on Linux or any other UNIX-like OS, including other versions of macOS/OSX.
No, but this works:
Add the user (e.g. jenkins) to the staff group: sudo dseditgroup -o edit -a jenkins -t user staff
Allow the group to sudo; in sudo visudo add:
%staff ALL = (ALL) ALL

Docker - container with multiple images

I wanted to make a Dockerfile with multiple images to run in one container.
What is the best method to go about this? Below is a list of what I wanted to run in a single container. I have not had any luck making a Dockerfile with all of these included.
MySQL Server
RabbitMQ
Java8
Node.js
Xvfb
Firefox
Chrome
This is what I have so far; can I get a few tips?
FROM stackbrew/ubuntu:12.04
MAINTAINER
# Update the repository sources list
# RUN apt-get update
# MySQL Server
RUN apt-get update -qq && apt-get install -y mysql-server-5.5
ADD my.cnf /etc/mysql/conf.d/my.cnf
RUN chmod 664 /etc/mysql/conf.d/my.cnf
ADD run /usr/local/bin/run
RUN chmod +x /usr/local/bin/run
VOLUME ["/var/lib/mysql"]
EXPOSE 3306
CMD ["/usr/local/bin/run"]
You cannot have "multiple images to run in one container"; that wouldn't make sense.
But you can write a Dockerfile to create an image that will install all the services you mentioned. Example (Ubuntu/Debian distribution):
[...header...]
# or use ubuntu-upstart:12.04
FROM stackbrew/ubuntu:12.04
MAINTAINER BPetkov
# Update the repository sources list
RUN apt-get update -qq
# Mysql
RUN apt-get install -y mysql-server-5.5
ADD my.cnf /etc/mysql/conf.d/my.cnf
RUN chmod 664 /etc/mysql/conf.d/my.cnf
ADD run /usr/local/bin/run
RUN chmod +x /usr/local/bin/run
# Other stuff
RUN apt-get -y install rabbitmq-server
RUN apt-get -y install nodejs
[...]
VOLUME ["/var/lib/mysql"]
EXPOSE 3306
EXPOSE .......
CMD ["/sbin/init"]
Then you would have to get all of them started automatically when the container starts.
You can use a process manager such as supervisord (Docker documentation here).
Alternatively, you could use a regular init system; check this base image: ubuntu-upstart. That one lets you simply install the packages in your Dockerfile and get them started automatically without any effort, by specifying /sbin/init as the ENTRYPOINT or CMD in your Dockerfile.
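A minimal sketch of the supervisord route (the config file name here is made up for the example; on Ubuntu/Debian the package is called supervisor):
RUN apt-get -y install supervisor
# supervisord.conf holds one [program:x] section per service: mysqld, rabbitmq-server, the node app, ...
ADD supervisord.conf /etc/supervisor/conf.d/services.conf
CMD ["/usr/bin/supervisord", "-n"]
The -n flag keeps supervisord in the foreground so the container stays alive.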
The feature you're looking for is Docker Compose.
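With Compose, each piece becomes its own service in a docker-compose.yml and runs as a separate container on a shared network; a minimal sketch (image tags and the password chosen purely for illustration):
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # illustration only
  rabbitmq:
    image: rabbitmq:3
EOF
docker-compose up -d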

Installing and Managing Jenkins on Amazon Linux

I'm looking to move Jenkins to Amazon EC2 running Amazon Linux.
Currently we have Jenkins installed as a package (via yum). I'm considering running Jenkins as the contained jenkins.war on EC2 (for auto-upgrades and ease of deployment).
Unfortunately I've been unable to find much documentation regarding managing jenkins as the latter.
I'm trying to determine:
Which installation is preferred, and why?
If running as a contained jar:
How do I start/stop jenkins?
Should I create a jenkins user?
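For reference, the contained-war route usually boils down to something like this (a sketch; the user name and paths are assumptions, not an official layout):
# one-time setup: a dedicated user whose home holds the war
sudo useradd -r -m -d /var/lib/jenkins jenkins
sudo wget -O /var/lib/jenkins/jenkins.war https://get.jenkins.io/war-stable/latest/jenkins.war
# start in the foreground; stop with Ctrl-C or by killing the java process
# (state lands in ~jenkins/.jenkins by default; set JENKINS_HOME to move it)
sudo -u jenkins java -jar /var/lib/jenkins/jenkins.war --httpPort=8080
In practice you would wrap that last line in a systemd unit or init script, which is essentially what the package install does for you.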
Installation Steps:
Please launch an Amazon Linux instance using the Amazon Linux AMI.
Login to your Amazon Linux instance.
Become root using the “sudo su -” command.
Update your repositories:
yum update
Get the Jenkins repository using the command below:
wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
Import the Jenkins repository key:
rpm --import http://pkg.jenkins-ci.org/redhat-stable/jenkins-ci.org.key
Install the Jenkins package:
yum install jenkins
Start Jenkins and make sure it starts automatically at system startup:
service jenkins start
chkconfig jenkins on
Open your browser and navigate to http://<Elastic-IP>:8080. You will see the Jenkins dashboard.
That’s it. You have your Jenkins setup up and running. Now you can create jobs to build the code.
Reference: http://sanketdangi.com/post/62715793234/install-configure-jenkins-on-amazon-linux
Jenkins Installation on Ubuntu 14.04/16.04
Please follow the steps given below.
Switch to the root user: sudo su -
sudo apt-get update
sudo apt-get install default-jdk
sudo apt-get install default-jre
wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
echo deb https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list
sudo apt-get update
sudo apt-get install jenkins
Get the initial Jenkins password: cat /var/lib/jenkins/secrets/initialAdminPassword
Browse to, e.g., http://192.168.xx.xx:8080
