I have a working Elasticsearch (ES) Docker container that I run like so:
docker run -p 80:9200 -p 9300:9300 --name es-loaded-with-data --privileged=true --restart=always es-loaded-with-data
I loaded ES up with a bunch of test data and wanted to save it in that state, so I followed up with:
docker commit containerid es-tester
docker save es-tester > es-tester.tar
then when I load it back in, the data is all gone... what gives?
docker load < es-tester.tar
If you started from the official ES image, it uses a volume for the data directory (https://github.com/docker-library/elasticsearch/blob/7d08b8e82fb8ca19745dab75ee32ba5a746ac999/2.1/Dockerfile#L41). Because of this, any data written to that volume is not included when Docker commits the container.
In order to back up the data, you need to copy it out of the container: docker cp <container name>:/usr/share/elasticsearch/data <dest>
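A minimal sketch of that round trip, reusing the image/container name from the question (the ./es-data destination path is my own choice):
# copy the data directory out of the running container
docker cp es-loaded-with-data:/usr/share/elasticsearch/data ./es-data
# later, start a fresh container with the copied data mounted back in
# (file ownership inside the mount may need fixing for the elasticsearch user)
docker run -p 80:9200 -p 9300:9300 -v "$PWD/es-data":/usr/share/elasticsearch/data es-loaded-with-data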
Related
I am trying to run the frdmrobotics/playground Docker image like in this tutorial,
and also connect the GUI via VcXsrv.
I tried to pull the image but got only errors.
I also have no idea how to connect VcXsrv from inside the image.
The pulling problem was solved by updating Docker for Windows: start it and let it search for an update.
After a restart I could pull the image with docker pull frdmrobotics/playground.
Starting a container of the image with
docker run -it frdmrobotics/playground
works, but the tutorial suggests the following:
cd to the desired development folder, e.g.
cd c:/rosProgramming
and then use:
docker run -dt --name robot_env --restart unless-stopped -v %cd%:/root/workspace frdmrobotics/playground
That starts the container, which can then be connected to using:
docker exec -it robot_env bash
For connecting VcXsrv, I found the info that I had to set some things inside the container:
export DISPLAY=192.168.105.1:0.0
export LIBGL_ALWAYS_INDIRECT=1
I got the IP address needed in that command via ipconfig; it is the one marked with "(WSL)".
After that, when I started a program it opened in my VcXsrv instance.
After that you can use:
cd /root/code/ros1
to navigate to the ros1 environment.
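If you'd rather not export those variables in every new shell, they can also be passed at container start with -e (a sketch based on the tutorial's command above; the IP address is still the "(WSL)" one from ipconfig):
docker run -dt --name robot_env --restart unless-stopped -e DISPLAY=192.168.105.1:0.0 -e LIBGL_ALWAYS_INDIRECT=1 -v %cd%:/root/workspace frdmrobotics/playground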
I was able to run a Flask API container successfully, but during app execution it fails and the container stops for some reason.
I checked the container logs and noticed a missing-file error. Now I want to debug which file is missing by accessing /bin/bash of the stopped container, but it throws an error saying the container is not running.
docker exec -it CONTAINER /bin/bash
Is there any workaround to access bash in the STOPPED container?
No, you cannot.
However, it might be useful to either check the logs or specify bash as the entrypoint when doing a docker run.
Checking logs: https://docs.docker.com/config/containers/logging/
docker logs <CONTAINER_NAME>
Shell Entry point: https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime
docker run -it --name <CONTAINER_NAME> --entrypoint /bin/bash <IMAGE_NAME>
If your container does not have /bin/bash, try
docker run -it --name <CONTAINER_NAME> --entrypoint /bin/sh <IMAGE_NAME>
You can try to use the docker commit command.
From the docs:
It can be useful to commit a container’s file changes or settings into
a new image. This allows you to debug a container by running an
interactive shell, or to export a working dataset to another server.
A resource with an example:
We can transform a container into a Docker image using the commit
command. All we need to know is the name or the identifier of the
stopped container. (You can get a list of all stopped containers with
docker ps -a).
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
0dfd54557799 ubuntu "/bin/bash" 25 seconds ago Exited (1) 4 seconds ago peaceful_feynman
Having the identifier 0dfd54557799 of the stopped container, we can
create a new Docker image. The resulting image will have the same
state as the previously stopped container. At this point, we use
docker run and overwrite the original entrypoint to get a way into the
container.
# Commit the stopped container as a new image
docker commit 0dfd54557799 debug/ubuntu
# now we have a new image
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
debug/ubuntu latest cc9db32dcc2d 2 seconds ago 64.3MB
# create a new container from the "broken" image
docker run -it --rm --entrypoint sh debug/ubuntu
# inside of the container we can inspect - for example, the file system
$ ls /app
App.dll
App.pdb
App.deps.json
# CTRL+D to exit the container
# the container removed itself (--rm), so only the image is left to delete
docker image rm debug/ubuntu
You can't, because a stopped container is as dead as a powered-down virtual machine. You can check its logs using the docker logs command.
docker container ls -aq
docker logs <name_of_your_dead_container>
From the docker-run man page:
--entrypoint=""
Overwrite the default ENTRYPOINT of the image
So use something like:
docker run --entrypoint=/usr/bin/sleep ...... 1000
This will start the container and make it sleep for 1000 seconds, allowing you to connect and debug. Note that the 1000 has to come after the image name, because everything after the image is passed as arguments to the entrypoint.
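Putting it together, a sketch of the whole debug loop (the image name myapp:latest is a placeholder for your broken image):
# keep the container alive doing nothing so we can get a shell inside it
docker run -d --name debug-me --entrypoint /usr/bin/sleep myapp:latest 1000
docker exec -it debug-me /bin/bash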
I monitor my JAR using the Elastic APM agent; I run this command manually:
java -javaagent:../infrastructure/agent/apm-agent.jar \
-Delastic.apm.service_name=server \
-Delastic.apm.server_urls=http://${APM_HOST}:8200 \
-Delastic.apm.application_packages=package.com \
-jar ./target/server-0.0.1-SNAPSHOT.jar &
Now I want to pass these parameters using docker run. I created the image and tried to pass the settings with this command, but the application is not starting:
docker run --name app -e CATALINA_OPTS='-Dspring.config.location=/usr/local/tomcat/application-recette.properties,/usr/local/tomcat/application.yml' \
    -e CATALINA_OPTS='-Delastic.apm.service_name=server' \
    -e CATALINA_OPTS='-Delastic.apm.server_urls=http://10.128.0.4:8200' \
    -e CATALINA_OPTS='-Delastic.apm.application_packages=package.com' \
    -d -p 9000:8080 image:v1
Any idea how to resolve this?
Thanks
There are actually many reasons why your app might not start, depending on how you set up and configured your ELK stack, but the following works fine for me:
I ship the application JAR and apm-agent.jar via the Dockerfile and run them inside the container:
FROM openjdk:8-jre-alpine
COPY javaProjects/test-apm/target/test-apm-0.0.1-SNAPSHOT.jar /app.jar
COPY elastic-apm-agent-1.19.0.jar /apm-agent.jar
CMD ["/usr/bin/java","-javaagent:/apm-agent.jar", "-Delastic.apm.service_name=my-cool-service -Delastic.apm.application_packages=main.java -Delastic.apm.server_urls=http://localhost:8200","-jar", "/app.jar"]
create an image from this Dockerfile:
docker build -t test-apm:latest ./
run the created image:
docker run --network host -p 8080:8080 test-apm:latest
Note: my apm-server and ELK stack were running on my host machine, which is why the container uses --network host and a localhost server URL.
I think if you do the same and make small changes to match your environment, it should work fine.
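As an alternative to hard-coding the -D flags in CMD, the Elastic APM Java agent can also pick up its settings from environment variables, so something closer to your original docker run should work too (a sketch, assuming your image already starts the JVM with -javaagent):
docker run --name app \
    -e ELASTIC_APM_SERVICE_NAME=server \
    -e ELASTIC_APM_SERVER_URLS=http://10.128.0.4:8200 \
    -e ELASTIC_APM_APPLICATION_PACKAGES=package.com \
    -d -p 9000:8080 image:v1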
From Windows, I connected to a Postgres Docker container from the local machine, but I can't see the tables that exist in the Postgres container. The data is not replicating locally. I followed this tutorial
for running the Postgres container on Windows.
I managed to create the tables from a dump file.
$ docker volume create --name postgres-volume
$ docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres
$ docker exec -it <container-id> bash -c "pg_dump -h <source-url> -U postgres -d postgres > /tmp/dump.sql"
$ docker exec -it <container-id> bash -c "psql -f /tmp/dump.sql -U postgres -d postgres"
Any help appreciated.
Containers
Containers are meant to be isolated instances of a program/service. They are isolated both from the host and from subsequent spawns of the same image. They start off on an isolated island, with nothing in it (that they didn't bring themselves). Any data they generate is lost upon their death. They are also completely oblivious to any data on the host (for now). But sometimes we want their data to be persistent, or we want to "inject" our own data each time they start up. Such is your case with PostgreSQL: we want PostgreSQL to have our schema available each time it starts up, and it would also be great if it retained any changes we made or data we loaded.
Docker Volumes
Enter docker volumes, a good method for managing persistent storage for containers. They are meant to be mounted in containers and let them write their data (or read that of prior instances), data which will not be deleted if the container instance is deleted. Once you create a volume with docker volume create myvolume1, it'll create a directory in /var/lib/docker/volumes/ (on Windows the default location is different, and it can be changed). You never have to be aware of the physical directory on your host; you only need to know the volume name myvolume1 (or whatever name you choose it to have).
Containers with persistent data (docker volumes)
As we said, containers by default are completely isolated from the host, and specifically from its filesystem. This means that when a container starts up, it doesn't know what's on the host's filesystem, and when the container instance is deleted, the data it generated during its life perishes with it.
But that'll be different if we use docker volumes. Upon a container's start-up, we can mount data from "outside" within it. This data can be either the docker volume we spoke of earlier or a specific path we want (such as /home/me/somethingimportant, which we manage ourselves). The latter isn't a docker volume but works just the same.
Tutorial
The tutorial you linked talks about mounting both a path and a docker volume (in separate examples). This is done with the -v flag when you execute docker run. Because with Docker on Windows there is an issue with permissions on the PostgreSQL data directory when a host path is mounted into the container, they recommend using docker volumes.
This means you'll have to create your schema and load any data you need after you use a docker volume with your instance of PostgreSQL. Subsequent restarts of the container must use the same docker volume.
docker volume create --name postgres-volume
docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres
From the tutorial
These are the two important lines. The first creates a docker volume and the second starts a fresh PostgreSQL instance. Any changes you make to that instance's data (DML, DDL) will be saved in the docker volume postgres-volume. If you've previously spun up a container (for example, PostgreSQL) that uses that volume, it'll find the data just as it was left last time. In other words, what makes the second line a fresh instance is the fact that the docker volume is empty (it was just created). Subsequent instances of PostgreSQL will find the schema and data you loaded previously.
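A quick way to convince yourself the data persists, reusing the names from the commands above:
# remove the container; the named volume is left untouched
docker rm -f postgres_db
# start a new container against the same volume; the schema and data are still there
docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres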
I'm curious if there's a way to see how much disk space a running Windows container is using in addition to the layers that are part of the container's image. Basically, how much the container "grew" since it was created.
In Linux (or Linux containers running in a Hyper-V VM), this would be docker ps -s; however, that command isn't implemented for Windows containers. I also tried docker system df -v, but it's also not implemented. Perhaps there's a hacky way of looking at a certain directory on disk or something?
I checked on Windows 10 1809 running non-Hyper-V (process isolation) containers; I'm pretty sure it's the same for Windows Server containers.
The data seems to be kept in:
C:\ProgramData\Docker\windowsfilter\{ContainerId}
There's a direct reference to the folder in docker inspect {Id} under GraphDriver\Data\dir.
The folder contains file sandbox.vhdx which appears to be the "writable layer" of each container.
I wasn't able to open it and view the filesystem, but if I write some data inside the container I can force the file to grow:
docker exec <Id> powershell get-childitem c:\ -recurse `> c:\windows\temp\test.txt
The layer persists when the container is stopped/restarted, and the folder is removed when the container is removed with docker rm.
While researching I saw an open PR in moby to improve cleanup of this folder.
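Based on that, a rough way to approximate docker ps -s by hand is to list the sandbox.vhdx sizes (a sketch; assumes an elevated PowerShell, and the on-disk VHDX size only approximates the data actually written):
Get-ChildItem C:\ProgramData\Docker\windowsfilter -Recurse -Filter sandbox.vhdx |
    Select-Object @{n='Layer';e={$_.Directory.Name}}, @{n='SizeMB';e={[math]::Round($_.Length / 1MB, 1)}}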
I'm using Docker for Windows (Docker Desktop 2.0.0.3) and docker ps -s is actually implemented there:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES SIZE
81acb264aa0f httpd "httpd-foreground" 6 minutes ago Up 6 minutes 80/tcp httpd 2B (virtual 132MB)
Docker for Windows runs the containers in a MobyLinuxVM. You can access the VM and the docker directories:
docker run --privileged -it -v /var/run/docker.sock:/var/run/docker.sock jongallant/ubuntu-docker-client
root@8b58d2fbe186:/# docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
root@8b58d2fbe186:/# chroot /host
Now you can access the docker folders in /var/lib/docker as on Linux and check the sizes.
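From the chroot you can then, for example, size the containers' writable layers (a sketch, assuming the VM uses the overlay2 storage driver):
# largest writable layers are listed last
du -sh /var/lib/docker/overlay2/*/diff 2>/dev/null | sort -h | tail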