I have a docker-compose.yml file and I start a container with a DB via the
docker-compose up -d db command.
I need to execute a script from the host machine that, briefly speaking, exports a dump into the DB in the container.
So, now it looks like:
docker-compose up -d db
./script.sh
But I want to combine these two commands into one.
My question is "Is it possible?"
I found out that Docker Compose doesn't support this feature.
I know that I can create another script with these commands in it, but I want to leave only
docker-compose up -d db
UPD: I would like to mention that I am using mcr.microsoft.com/mssql/server:2017-latest image
Also, I have to say one more time that I need to execute the script on the host machine specifically.
You can't use the Docker tools to execute commands on the host system. A general design point around Docker is that containers shouldn't be able to affect the host.
Nothing stops you from writing your own shell script that runs on the host and does the steps you need:
#!/bin/sh
docker-compose up -d db
./wait-for.sh localhost 1433
./script.sh
(The wait-for.sh script is the same as described in the answers to Docker Compose wait for container X before starting Y that don't depend on Docker health checks.)
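If you'd rather not depend on an external helper, a minimal wait loop is easy to write yourself. A rough sketch (assuming nc is available on the host and using the same port as above):
#!/bin/sh
# up-and-import.sh - hypothetical wrapper script; adjust names and ports to your setup
set -e
docker-compose up -d db
# Poll until something is listening on 1433 (an open port doesn't guarantee logins work yet)
until nc -z localhost 1433; do
  echo "waiting for database..."
  sleep 2
done
./script.sh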
For your use case it may be possible to run the data importer in a separate container. A typical setup could look like this; note that the importer will run every time you run docker-compose up. You may want to actually build this into a separate image.
version: '3.8'
services:
  db: { same: as you have currently }
  importer:
    image: mcr.microsoft.com/mssql/server:2017-latest
    entrypoint: ./wait-for.sh db 1433 -- ./script.sh
    working_dir: /import
    volumes: ['.:/import']
The open-source database containers also generally support putting scripts in /docker-entrypoint-initdb.d that get executed the first time the container is launched, but the SQL Server image doesn't seem to support this; questions like How can I restore an SQL Server database when starting the Docker container? have a complicated setup to replicate this behavior.
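For reference, that workaround boils down to a custom entrypoint that starts SQL Server in the background, waits for it, and then runs your SQL with sqlcmd. A rough sketch (the init.sql path and SA_PASSWORD handling are assumptions; the sqlservr/sqlcmd paths are the ones the 2017 image ships with, but verify them for your tag):
#!/bin/bash
# entrypoint.sh - hypothetical init wrapper for the SQL Server image
/opt/mssql/bin/sqlservr &
# Wait until the server accepts logins, then run the init script once
until /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -Q "SELECT 1" >/dev/null 2>&1; do
  sleep 2
done
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -i /import/init.sql
# Keep SQL Server in the foreground so the container stays up
wait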
Related
I installed oracle db version 19c in my docker environment with the following command:
docker run --name oracle19c --network host -p 1521:1521 -p 5500:5500
-v /opt/oracle:/u01/oracle oracle/database:19.3.0-ee
Then I connect to it with:
docker exec -ti oracle19c sqlplus system/oracle@orclpdb1
SQL>
Then I setup my database. Afterwards I want to import dummy data from a tbl file so I exit sqlplus and I use the command:
sqlldr userid=system control=/home/userhere/sql_loader/control.ctl log=sf1customer.log
and get sqlldr: not found
I don't have much experience with Docker, but my research leads me to believe that SQL*Loader does not come with the docker image. However, I do not know how to extend the image or where exactly I would call SQL*Loader even if I did. I am on an Ubuntu server and any help would be appreciated.
SQL*Loader is in the image - but the docker container is separate from your host OS, so Ubuntu doesn't know any of the files or commands inside it exist. Any commands inside the container should be run as docker commands. If you try this, it should connect to your running container and print the help page:
docker exec -ti oracle19c sqlldr
Since you're running this command on the docker container, sqlldr doesn't have access to any of your host OS's files unless you specifically granted them to the container. But good news - when you started the database with docker run, that's what the -v /opt/oracle:/u01/oracle part of the command did - it mapped /opt/oracle on your Ubuntu filesystem to /u01/oracle in the docker container. So any files that you put in /opt/oracle will be available in the container under /u01/oracle.
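You can check the mapping for yourself by putting a file into /opt/oracle on the host and listing it from inside the container, for example (you may need sudo on the host side):
# On the Ubuntu host: copy the loader files into the mounted directory
cp /home/userhere/sql_loader/control.ctl /opt/oracle/
# Inside the container the same file shows up under /u01/oracle
docker exec -ti oracle19c ls -l /u01/oracle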
So you'll need to do a couple things:
Your control.ctl file, log file, and whatever data file you're using need to be accessible to the container. Either move them to /opt/oracle, or shut down your database container and restart it with something like -v /home/userhere/sql_loader:/u01/oracle in the command (see the sketch after this list).
You might also need to edit your control.ctl file to make sure that it doesn't reference any file paths on your host OS. Either use relative paths (./myfile.csv) or absolute paths within the container's filesystem (/u01/oracle/myfile.csv).
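If you go the restart route, it could look roughly like this; the flags are based on your original docker run command, so double-check them (and make sure any datafile locations you care about are also mounted) before removing the container:
docker stop oracle19c
docker rm oracle19c
docker run --name oracle19c -p 1521:1521 -p 5500:5500 \
  -v /home/userhere/sql_loader:/u01/oracle \
  -d oracle/database:19.3.0-ee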
Now you should be able to run sqlldr on the container, and it should be able to access your data files.
docker exec -ti oracle19c sqlldr userid=system control=/u01/oracle/control.ctl log=/u01/oracle/sf1customer.log
Edit: Oh, I should mention - as an alternative, if you download and install the Oracle Instant Client in Ubuntu, you could run sqlldr locally in Ubuntu, and connect to the docker container over the network as a "remote" database:
sqlldr system@localhost:1521/orclpdb1 control=/home/userhere/sql_loader/control.ctl log=sf1customer.log
That way you don't have to move your files anywhere.
From Windows, I connected to a Postgres Docker container from the local machine, but I can't see the tables that exist in the Postgres container. The data is not replicating locally. I followed this tutorial for running the Postgres container on Windows.
I managed to create the tables from a dump file.
$ docker volume create --name postgres-volume
$ docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres
$ docker exec -it <container-id> bash -c "pg_dump -h <source-url> -U postgres -d postgres > /tmp/dump.sql"
$ docker exec -it <container-id> bash -c "psql -f /tmp/dump.sql -U postgres -d postgres"
Any help, appreciated.
Containers
Containers are meant to be isolated instances of a program/service. They are isolated both from the host and from subsequent spawns of the same image. They start off on an isolated island, with nothing on it that they didn't bring themselves. Any data they generate is lost upon their death. They are also completely oblivious to any data on the host (for now). But sometimes we want their data to be persistent, or we want to "inject" our own data each time they start up. Such is your case with PostgreSQL: we want PostgreSQL to have our schema available each time it starts up, and it would also be great if it retained any changes we made or data we loaded.
Docker Volumes
Enter docker volumes. They are a good way to manage persistent storage for containers. They are meant to be mounted inside containers and let them write their data (or read data from prior instances), and that data will not be deleted when the container instance is deleted. Once you create a volume with docker volume create myvolume1, it'll create a directory in /var/lib/docker/volumes/ (on Windows the default location is different, and it can be changed). You never have to be aware of the physical directory on your host; you only need to be aware of the volume name myvolume1 (or whatever name you choose for it).
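For example, with the standard docker CLI:
# Create a named volume and see what Docker records about it
docker volume create myvolume1
docker volume inspect myvolume1   # shows the mountpoint under /var/lib/docker/volumes/
docker volume ls                  # lists all volumes by name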
Containers with persistent data (docker volumes)
As we said, containers are by default completely isolated from the host, and specifically from its filesystem. This means that when a container starts up, it doesn't know what's on the host's filesystem, and when the container instance is deleted, the data it generated during its life perishes with it.
But, that'll be different if we use docker volumes. Upon a container's start-up, we can mount within it data from "outside". This data can either be the docker volume we spoke of earlier or a specific path we want (such as /home/me/somethingimport which we manage ourselves). The latter isn't a docker volume but works just the same.
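The two variants only differ in the left-hand side of the -v argument. For example (using the postgres image and the paths mentioned above; the password is a placeholder):
# Named docker volume: Docker decides where the data physically lives
docker run -e POSTGRES_PASSWORD=password -v myvolume1:/var/lib/postgresql/data -d postgres
# Bind mount: you pick (and manage) the host path yourself
docker run -e POSTGRES_PASSWORD=password -v /home/me/somethingimport:/var/lib/postgresql/data -d postgres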
Tutorial
The tutorial you linked talks about mounting both a path and a docker volume (in separate examples). This is done with the -v flag when you execute docker run. Because there is an issue with permissions on the PostgreSQL data directory on the host (which is mounted in the container) when using Docker on Windows, they recommend using docker volumes.
This means you'll have to create your schema and load any data you need after you used a docker volume with your instance of PostgreSQL. Subsequent restarts of the container must use the same docker volume.
docker volume create --name postgres-volume
docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres
From the tutorial
These are the two important lines. The first creates a docker volume and the second starts a fresh PostgreSQL instance. Any changes you make to that instance's data (DML, DDL) will be saved in the docker volume postgres-volume. If you've previously spun up a container (for example, PostgreSQL) that uses that volume, it'll find the data just as it was left last time. In other words, what makes the second line a fresh instance is the fact that the docker volume is empty (it was just created). Subsequent instances of PostgreSQL will find the schema+data you loaded previously.
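You can convince yourself of this by throwing the container away and starting a new one against the same volume; anything you loaded is still there. A small sketch using the names from the commands above:
# Load something into the running instance
docker exec -it postgres_db psql -U postgres -c "CREATE TABLE demo (id int);"
# Remove the container entirely...
docker stop postgres_db && docker rm postgres_db
# ...and start a brand new one on the same volume (give it a few seconds to come up)
docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres
docker exec -it postgres_db psql -U postgres -c "\dt"   # the demo table is still there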
I migrated from Linux to Windows and tried to setup a postgres container with a mounted directory (copied from my Linux install) containing the database.
This does not work.
Windows mounts are always owned by root
Postgres does not run under root
How to get this unholy combination to work?
You don't provide many details, so it is difficult to tell what actually went wrong. However, there is a known issue with a Postgres setup on Docker for Windows that uses a Windows mount for the database data files. In that case, running docker logs will show something along the following lines:
waiting for server to start....FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
stopped waiting
pg_ctl: could not start server
Unfortunately there is no way to overcome this issue, so you cannot use a Windows mount; see Postgres Data has wrong ownership. You may use docker volumes instead in order to make the database data independent of the docker postgres container, using the following commands:
docker create -v /var/lib/postgresql/data --name PostgresData alpine
docker run -p 5432:5432 --name yourPostgres -e POSTGRES_PASSWORD=yourPassword -d --volumes-from PostgresData postgres
You may find a more thorough explanation at Setup Postgresql on Windows with Docker.
I want to build a Cassandra cluster with Docker. The documentation already tells you how to do this, so that is not the problem I have.
However I am currently using Docker on Windows 10 and obviously it cannot execute the nested command in docker run --name some-cassandra2 -d -e CASSANDRA_SEEDS="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-cassandra)" cassandra:tag which results in an empty seed list for the container.
How can I nest a command like this in Windows or - if this is not possible - get a workaround for this?
I managed to fix it thanks to a docker-compose.yml by Jason Giedymin. It should work in v1 as well as v2 of docker-compose. By doing it this way you just let docker do the linking from the get-go and tell the Cassandra nodes about other seeds via the environment variable the image already gives you.
The sleep 30 part is pretty smart as well, as it makes sure that the second container doesn't try to connect to a container that isn't fully up yet.
One thing I would recommend, though, is using external_links instead of links. This way other containers don't rely on all of the Cassandra containers being up in order to start/work, which would otherwise defeat the purpose of a distributed database.
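Roughly, the kind of compose file being described looks like the sketch below (this is not the exact file; service names and the image tag are placeholders, and it uses plain links rather than the external_links variant for brevity):
# Write a two-node compose file and bring it up (sketch only; adjust names/versions)
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  cassandra-seed:
    image: cassandra:3.11
  cassandra-node:
    image: cassandra:3.11
    command: /bin/bash -c "sleep 30 && /docker-entrypoint.sh cassandra -f"
    environment:
      - CASSANDRA_SEEDS=cassandra-seed
    links:
      - cassandra-seed
EOF
docker-compose up -d
With the external_links variant you would start the seed container separately and reference its container name instead of declaring it in the same file.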
I still don't know how to nest Windows cmd commands into each other so I would still be thankful for some tips.
Often I come across this situation:
I have an existing docker container, running a certain service, typically set up from a Dockerfile found on GitHub, etc., and usually based on Ubuntu
I am able to run commands inside this container (with docker exec or by setting an entrypoint), including sh
Interactive commands like vi, nano, aptitude or mc don't work because of the buggy terminal of Docker Toolbox - with errors ranging from broken arrow keys and garbled characters to outright crashes.
Now the question:
Can I run anything inside my container to connect to a machine with a proper terminal? For example I could SSH into the docker host, so maybe I can run something there that the container can connect to?
I tried mosh, but it seems the mosh client does not run a shell by itself, but instead tries to forward to sshd, which the container doesn't have.
Docker is used to create lightweight containers that can run a service with as few resources as possible. At the same time, docker does not limit what code, apps or utilities you may want to run. That being said, if you want to connect to the container as you would to other Linux servers, via ssh, you need to make sure that the docker instance contains and is running an ssh server such as openssh-server, and that you expose the port, normally port 22, when you execute the 'docker run' command.
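For an Ubuntu-based container that is already running, a rough sketch of that could look like this (the container name, password and IP address are placeholders; on Docker Toolbox, run the ssh step from inside the boot2docker VM, or publish port 22 with -p when you start the container):
# Install an SSH server inside the running container, allow root password login, start it
docker exec -ti mycontainer bash -c "\
  apt-get update && apt-get install -y openssh-server && \
  echo 'root:changeme' | chpasswd && \
  sed -i 's/#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config && \
  service ssh start"
# Find the container's IP and connect to it from the docker host
docker inspect --format '{{ .NetworkSettings.IPAddress }}' mycontainer
ssh root@172.17.0.2   # substitute the IP printed above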