After I've used rethinkdb restore, where does rethinkdb import that data / access that data from?
I've tried searching for this answer, but my choice of keywords must be inadequate.
I want to use this directory as a shared volume for my Docker container, so the container is "separate" from the data but also has read/write access to it.
It imports into the data directory, which by default is the folder rethinkdb_data in the working directory where you execute rethinkdb, unless you specify a different one with -d.
$ rethinkdb -h
Running 'rethinkdb' will create a new data directory or use an existing one,
and serve as a RethinkDB cluster node.
File path options:
  -d [ --directory ] path   specify directory to store data and metadata
If you are using Docker and you didn't change the data directory with -d, then it is probably stored in rethinkdb_data under the directory set by the WORKDIR instruction in the Dockerfile. You can mount it outside the container for persistence.
Take this image as an example: https://github.com/stuartpb/rethinkdb-dockerfiles/blob/master/trusty/2.1.4/Dockerfile, it's the official RethinkDB Docker image: https://hub.docker.com/_/rethinkdb/
We can see that it has the instruction:
WORKDIR /data
And it runs with:
CMD ["rethinkdb", "--bind", "all"]
Therefore, it stores data in /data/rethinkdb_data. You can mount either the whole /data directory or only /data/rethinkdb_data/.
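For example, a minimal sketch of mounting a host directory over the image's /data so the data survives container removal (the host path is an assumption):
# Host path is illustrative; the official image keeps its data under /data,
# so rethinkdb_data ends up inside the mounted host directory
docker run -d --name rethinkdb \
  -v /host/path/rethinkdb:/data \
  rethinkdb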
Related
I have a container that I start like
docker run -it --mount type=bind,source=/path/to/my/data,target=/staging -v myvol:/myvol buildandoid bash -l
It has two mounts: one bind mount that I use to get data into the container, and one named volume that I use to persist data. The container is used as a reproducible Android (AOSP) build environment, so it is not your typical web service.
I would like to access the files on myvol from the Windows host. If I use an absolute path for the mount, e.g. -v /c/some/path:/myvol, I can do that, but I believe Docker creates copies of all the files and keeps them in sync. I really want to avoid creating these files on the Windows side (for space reasons, as it is several GB, and for performance reasons, since NTFS doesn't seem to handle many little files well).
Can I somehow "mount" a container directory or a named volume on the host? So the exact reverse of a bind mount. I think alternatively I could install samba or sshd in the container and use that, but maybe there is something built into Docker / VirtualBox to achieve this.
Use bind mounts.
https://docs.docker.com/engine/admin/volumes/bind-mounts/
By contrast, when you use a volume, a new directory is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents.
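For example, a sketch of replacing the named volume with a second bind mount so the files live directly on the host (the /c/host/outdir path is an assumption):
# Keep the existing bind mount for inputs and bind-mount a host directory
# where the container previously used the named volume
docker run -it \
  --mount type=bind,source=/path/to/my/data,target=/staging \
  --mount type=bind,source=/c/host/outdir,target=/myvol \
  buildandoid bash -l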
I want to back up my local DynamoDB server. I have installed DynamoDB Local on a Linux machine. Some sites suggest creating a Bash script and connecting to an S3 bucket, but on a local machine we don't have an S3 bucket.
So I am stuck with my work. Please help me. Thanks.
You need to find the database file created by DynamoDB Local. From the docs:
-dbPath value — The directory where DynamoDB will write its database file. If you do not specify this option, the file will be written to the current directory. Note that you cannot specify both -dbPath and -inMemory at once.
The file name would be of the form youraccesskeyid_region.db. If you used the -sharedDb option, the file name would be shared-local-instance.db.
By default, the file is created in the directory from which you ran DynamoDB Local. To restore, you'll have to copy that same file back and, when running DynamoDB Local, specify the same -dbPath.
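A hedged sketch of what that looks like, assuming the -sharedDb file name and illustrative paths:
# Back up: copy the database file somewhere safe (paths are assumptions)
cp /path/to/dynamodb/shared-local-instance.db /backups/
# Restore: copy it back and start DynamoDB Local against the same directory
cp /backups/shared-local-instance.db /path/to/dynamodb/
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar \
  -sharedDb -dbPath /path/to/dynamodb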
I have downloaded a PostgreSQL Docker image and at the moment I am editing some config files. The problem I have is that whenever I edit the config files and commit the container as a new image, the changes are never saved. The image is still the same as the one I downloaded.
Image I am using:
https://hub.docker.com/_/postgres/
I believe this is the latest Dockerfile.
https://github.com/docker-library/postgres/blob/a00e979002aaa80840d58a5f8cc541342e06788f/9.6/Dockerfile
This is what I did:
1. Run the postgresql docker container
2. Enter the terminal of the container. docker exec -i -t {id of container} /bin/bash
3. Edit some config files.
4. Exit the container.
5. Commit the changes by using docker commit {containerid} {new name}
6. Stop the old container and start the new one.
The new container is created. If I start the new container with the new image and check the config files I edited, my changes are not there. No changes were committed.
What am I doing wrong here?
The Dockerfile contains a volume declaration:
https://github.com/docker-library/postgres/blob/a00e979002aaa80840d58a5f8cc541342e06788f/9.6/Dockerfile#L52
VOLUME /var/lib/postgresql/data
No file edits under this path will be saved in a Docker image commit. These data files are deliberately excluded, as they define your container's state. Images, on the other hand, are designed to create new containers, so VOLUMEs are a mechanism to keep state separate.
It would appear that you're attempting to use Docker images as a mechanism for DB backup and recovery. This is ill-advised, as the Docker file system is less performant than the native file system typically exposed through a volume.
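If you want to see where that declared volume actually keeps your data on the host, a hedged sketch (the container name is an assumption):
# Lists the anonymous volume backing /var/lib/postgresql/data for the container
docker inspect -f '{{ json .Mounts }}' my_postgres_container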
As Mark rightly points out, your data is left behind because of the volume definition, and that should not be altered for general production use.
If you have a legitimate reason to keep the data within the image you produce, you can move the Postgres data out of the volume by adding the following to your Dockerfile:
ENV PGDATA /var/lib/postgresql/my_data
RUN mkdir -p $PGDATA
I've been using this technique to produce DB images for testing, to speed up the feedback loop.
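A hedged sketch of how that can be used once those lines are in your Dockerfile (image and container names are illustrative):
# Build an image whose PGDATA lies outside the declared volume, seed it,
# then commit the seeded container as a reusable test image
docker build -t postgres-testdata .
docker run -d --name seed postgres-testdata
# ... load test data into the running container, then:
docker commit seed postgres-testdata:seeded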
Several articles have been extremely helpful in understanding Docker's volume and data management. These two in particular are excellent:
http://container-solutions.com/understanding-volumes-docker/
http://www.alexecollins.com/docker-persistence/
However, I am not sure if what I am looking for is discussed. Here is my understanding:
When running docker run -v /host/something:/container/something the host files will overlay (but not overwrite) the container files at the specified location. The container will no longer have access to the location's previous files, but instead only have access to the host files at that location.
When defining a VOLUME in a Dockerfile, other containers may share the contents created by the image/container.
The host may also view/modify a Dockerfile volume, but only after discovering the true mountpoint using docker inspect (usually somewhere like /var/lib/docker/vfs/dir/cde167197ccc3e138a14f1a4f7c....). However, this is hairy when Docker has to run inside a VirtualBox VM.
How can I reverse the overlay so that when mounting a volume, the container files take precedence over my host files?
I want to specify a mountpoint where I can easily access the container filesystem. I understand I can use a data container for this, or I can use docker inspect to find the mountpoint, but neither solution is a good solution in this case.
The docker 1.10+ way of sharing files would be through a volume, as in docker volume create.
That means that you can use a data volume directly (you don't need a container dedicated to a data volume).
That way, you can share and mount that volume in a container which will then keep its content in said volume.
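A minimal sketch of that approach (volume and image names are illustrative):
# Create a named volume and mount it; files the container writes there
# persist in the volume instead of being shadowed by a host directory
docker volume create mydata
docker run -d -v mydata:/container/something myimage
# See where Docker keeps the volume on the host (or inside the VM)
docker volume inspect mydata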
That is more in line with how a container works: isolating memory, CPU, and filesystem from the host. That is why you cannot "mount a volume and have the container's files take precedence over the host files": it would break the container's isolation and expose its content to the host.
Begin your container's script by copying files from a read-only bind mount that reflects the host files to a work location in the container. End the script by copying the necessary results from the container's work location back to the host, using either the same or a different mount point.
Alternatively to the end-of-script copy, run the container without automatically removing it at the end, then run docker cp CONTAINER_NAME:CONTAINER_DIR HOST_DIR, then docker rm CONTAINER_NAME.
Alternatively to copying results back to the host, keep them in a separate "named" volume, provided that the container has it mounted (type=volume,src=datavol,dst=CONTAINER_DIR/work). Use the named volume with other docker run commands to retrieve or use the results.
The input files may be modified on the host during development between repeated runs of the container. Avoid shadowing them with stale files in the named volume; beginning the container script by copying the input files from the host helps with this.
Using a named volume helps running the container read-only. (One may still need --tmpfs /tmp for temporary files or --tmpfs /tmp:exec if some container commands create and run executable code in the temporary location).
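A hedged sketch of that workflow (image name, paths, and the build command are all illustrative):
# Copy inputs in from a read-only bind mount, work inside a named volume,
# then copy the results back out and remove the container
docker run --name work \
  --mount type=bind,source=/host/input,target=/input,readonly \
  --mount type=volume,src=datavol,dst=/work \
  myimage sh -c 'cp -a /input/. /work/ && build-something /work'
docker cp work:/work/results /host/results
docker rm work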
I am using the following base docker file:
https://github.com/wnameless/docker-oracle-xe-11g/blob/master/Dockerfile
I read a bit on how to set up a data volume from this SO question and this blog, but I'm not sure how to fit the pieces together.
In short, I would like to manage the Oracle data in a data-only Docker image. How do I do it?
I've implemented volume mounts for the DB data.
Here is my fork:
Reduced the image size from 3.8 GB to 825 MB
Moved database initialization out of the image build phase; the database now initializes at container startup when no database files are mounted
Added support for reusing database media outside of the container, plus graceful shutdown on container stop
Removed sshd
You may check here:
https://registry.hub.docker.com/u/sath89/oracle-xe-11g
https://github.com/MaksymBilenko/docker-oracle-xe-11g
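A hedged example of running it with the data directory mounted from the host (the host path is an assumption, and the in-container path should be double-checked against the image's README):
# Host path is illustrative; verify the exact data path in the image docs
docker run -d -p 1521:1521 \
  -v /host/path/oracle-data:/u01/app/oracle \
  sath89/oracle-xe-11g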
I tried mapping the data files and fast recovery directories in my Oracle XE container. However, I changed my mind after losing the files... so you should be very careful with this approach and understand how Docker manages those spaces under all operations.
I found, for example, that if you clean out old containers, the contents of the mapped directories will be deleted even if they are mapped to something outside the docker system area (/var/lib/docker). You can avoid this by keeping containers and starting them up again. But, if you want to version and make a new image... you have to backup those files.
Oracle also id's the files themselves (checksum or inode # or something) and complains about them on startup.... I did not investigate the extent of that issue or even if there is indeed any issue there.
I've opted to not map any of those files/dirs and plan to use datapump or whatever to get the data out until I get a better handle on all that can happen.
So I update the data and version the image... pushing it to the repo for safe-keeping.
In general:
# Start a data container
docker run -d -v /dbdata --name dbdata -it ubuntu
# Put the oracle data in /dbdata somehow
# Start a container with the database and look for data at /dbdata
docker run -d --volumes-from dbdata --name db -it ubuntu
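One hedged way to do the "put the oracle data in /dbdata" step, assuming the data already exists on the host (the host path is an assumption):
# Copies the host directory's contents into the data container's volume
docker cp /host/backup/oracle-data/. dbdata:/dbdata/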