I migrated from Linux to Windows and tried to set up a postgres container with a mounted directory (copied from my Linux install) containing the database.
This does not work.
Windows mounts are always owned by root.
Postgres does not run under root.
How to get this unholy combination to work?
You don't provide many details, so it is difficult to tell what actually went wrong. However, there is a known issue with Postgres setups on Docker for Windows that use a Windows mount for the database data files. In that case, running docker logs will show something along the following lines:
waiting for server to start....FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
stopped waiting
pg_ctl: could not start server
Unfortunately there is no way to overcome this issue, so you cannot use a Windows mount; see Postgres Data has wrong ownership. You can use docker volumes instead, in order to make the database data independent of the postgres container, using the following commands:
docker create -v /var/lib/postgresql/data --name PostgresData alpine
docker run -p 5432:5432 --name yourPostgres -e POSTGRES_PASSWORD=yourPassword -d --volumes-from PostgresData postgres
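On more recent Docker versions you can skip the intermediate data container and use a named volume directly; a minimal sketch (the volume name postgres-data is a placeholder):

docker volume create postgres-data
docker run -p 5432:5432 --name yourPostgres -e POSTGRES_PASSWORD=yourPassword -v postgres-data:/var/lib/postgresql/data -d postgres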
You may find a more thorough explanation at Setup Postgresql on Windows with Docker.
I installed oracle db version 19c in my docker environment with the following command:
docker run --name oracle19c --network host -p 1521:1521 -p 5500:5500 \
  -v /opt/oracle:/u01/oracle oracle/database:19.3.0-ee
Then I connect to it with:
docker exec -ti oracle19c sqlplus system/oracle@orclpdb1
SQL>
Then I set up my database. Afterwards I want to import dummy data from a .tbl file, so I exit sqlplus and use the command:
sqlldr userid=system control=/home/userhere/sql_loader/control.ctl log=sf1customer.log
and get sqlldr: not found
I don't have much experience with Docker, but my research leads me to believe that SQL*Loader does not come with the docker image. However, I do not know how to extend the image, or where exactly I would call SQL*Loader even if I did. I am on an Ubuntu server; any help would be appreciated.
SQL*Loader is in the image - but the docker container is separate from your host OS, so Ubuntu doesn't know that any of the files or commands inside it exist. Any commands inside the container should be run as docker commands. If you try this, it should connect to your running container and print the help page:
docker exec -ti oracle19c sqlldr
Since you're running this command in the docker container, sqlldr doesn't have access to any of your host OS's files unless you specifically granted them to the container. But good news - when you started the database with docker run, that's what the -v /opt/oracle:/u01/oracle part of the command did: it mapped /opt/oracle on your Ubuntu filesystem to /u01/oracle in the docker container. So any files that you put in /opt/oracle will be available in the container under /u01/oracle.
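You can confirm the mapping by listing the mounted directory from inside the container, for example:

docker exec -ti oracle19c ls /u01/oracle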
So you'll need to do a couple things:
Your control.ctl file, log file, and whatever data file you're using need to be accessible to the container. Either move them to /opt/oracle, or shut down your database container and restart it with something like -v /home/userhere/sql_loader:/u01/oracle in the command (see the sketch after this list).
You might also need to edit your control.ctl file to make sure that it doesn't reference any file paths on your host OS. Either use relative paths (./myfile.csv) or absolute paths on the container's filesystem (/u01/oracle/myfile.csv).
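If you go the restart route, the shape of it is roughly the following. Here I mount the files at a separate path, /u01/sql_loader (my choice, so the original /opt/oracle mapping can stay in place); also note that docker rm discards anything in the container that isn't stored on a mounted path:

docker stop oracle19c
docker rm oracle19c
docker run --name oracle19c --network host -p 1521:1521 -p 5500:5500 \
  -v /opt/oracle:/u01/oracle -v /home/userhere/sql_loader:/u01/sql_loader \
  oracle/database:19.3.0-ee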
Now you should be able to run sqlldr in the container, and it should be able to access your data files:
docker exec -ti oracle19c sqlldr userid=system control=/u01/oracle/control.ctl log=/u01/oracle/sf1customer.log
Edit: Oh, I should mention - as an alternative, if you download and install the Oracle Instant Client on Ubuntu, you could run sqlldr locally on Ubuntu and connect to the docker container over the network as a "remote" database:
sqlldr system@localhost:1521/orclpdb1 control=/home/userhere/sql_loader/control.ctl log=sf1customer.log
That way you don't have to move your files anywhere.
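If you take that route, the rough shape of the Instant Client setup is below; sqlldr ships in the Tools package, which also requires the Basic package, and the exact zip and directory names depend on the version you download, so treat them as placeholders:

# download the Basic and Tools zips for your version from Oracle's site first
unzip instantclient-basic-linux.x64-19.3.0.0.0dbru.zip -d /opt/oracle
unzip instantclient-tools-linux.x64-19.3.0.0.0dbru.zip -d /opt/oracle
export LD_LIBRARY_PATH=/opt/oracle/instantclient_19_3:$LD_LIBRARY_PATH
export PATH=/opt/oracle/instantclient_19_3:$PATH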
From Windows, I connected to the Postgres Docker container from the local machine, but I can't see the tables that exist in the postgres container. The data is not replicating locally. I followed this tutorial for running the postgres container on Windows.
I managed to create the tables from a dump file:
$ docker volume create --name postgres-volume
$ docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres
$ docker exec -it <container-id> bash -c "pg_dump -h <source-url> -U postgres -d postgres > /tmp/dump.sql"
$ docker exec -it <container-id> bash -c "psql -f /tmp/dump.sql -U postgres -d postgres"
Any help is appreciated.
Containers
Containers are meant to be isolated instances of a program/service. They are isolated both from the host and from subsequent spawns of the same image. They start off on an isolated island, with nothing on it that they didn't bring themselves. Any data they generate is lost upon their death. They are also completely oblivious to any data on the host (for now). But sometimes we want their data to be persistent, or we want to "inject" our own data each time they start up. Such is your case with PostgreSQL: we want PostgreSQL to have our schema available each time it starts up, and it would also be great if it retained any changes we made or data we loaded.
Docker Volumes
Enter docker volumes. They are a good way to manage persistent storage for containers. They are meant to be mounted in containers and let them write their data (or read data from prior instances), and that data will not be deleted if the container instance is deleted. Once you create a volume with docker volume create myvolume1, Docker creates a directory in /var/lib/docker/volumes/ (on Windows the default location is different, and it can be changed). You never have to be aware of the physical directory on your host; you only need to be aware of the volume name, myvolume1 (or whatever name you choose).
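If you do want to see where a volume physically lives, docker volume inspect will tell you (look at the Mountpoint field in its output):

docker volume inspect myvolume1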
Containers with persistent data (docker volumes)
As we said, containers are, by default, completely isolated from the host - specifically from its filesystem, too. This means that when a container starts up, it doesn't know what's on the host's filesystem, and when the container instance is deleted, the data it generated during its life perishes with it.
But that changes if we use docker volumes. Upon a container's start-up, we can mount data from "outside" within it. This data can either be the docker volume we spoke of earlier or a specific path we choose (such as /home/me/somethingimport, which we manage ourselves). The latter isn't a docker volume (it is called a bind mount) but works just the same.
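Side by side, the two options look like this (a sketch; the password and the host path are placeholders):

# named volume, managed by Docker
docker run -e POSTGRES_PASSWORD=password -v myvolume1:/var/lib/postgresql/data -d postgres
# bind mount: a host path we manage ourselves
docker run -e POSTGRES_PASSWORD=password -v /home/me/somethingimport:/var/lib/postgresql/data -d postgres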
Tutorial
The tutorial you linked talks about mounting both a path and a docker volume (in separate examples). This is done with the -v flag when you execute docker run. Because, when using docker on Windows, there is an issue with permissions on the PostgreSQL data directory on the host (which is mounted in the container), they recommend using docker volumes.
This means you'll have to create your schema and load any data you need after you have started your PostgreSQL instance with a docker volume. Subsequent restarts of the container must use the same docker volume.
docker volume create --name postgres-volume
docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres
From the tutorial
These are the two important lines. The first creates a docker volume, and the second starts a fresh PostgreSQL instance. Any changes you make to that instance's data (DML, DDL) will be saved in the docker volume postgres-volume. If you've previously spun up a container (for example, PostgreSQL) that uses that volume, it'll find the data just as it was left last time. In other words, what makes the second line a fresh instance is the fact that the docker volume is empty (it was just created). Subsequent instances of PostgreSQL will find the schema and data you loaded previously.
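You can verify the persistence yourself: remove the container and start a new one that points at the same volume, and the tables you created will still be there:

docker stop postgres_db
docker rm postgres_db
docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres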
I have an issue similar to the one in the following link, but in PowerShell, as I am running a ClickHouse docker container on Windows 10.
Data directory permissions on host for Clickhouse installation via docker
My setup is run as such:
docker run -d -p 8124:8124 --name my_database --ulimit nofile=262144:262144 --volume=E:/:/var/lib/clickhouse yandex/clickhouse-server
The E drive is one of the drives on my Windows computer.
I cannot seem to access /var/lib/clickhouse/data when running a MergeTree table creation. It seems that the clickhouse client is not being given adequate permissions to reach this filesystem. The error looks as such:
Access to file denied: /var/lib/clickhouse/data/default/test_mergetree/tmp_insert_20150731_20150731_8_8_0
Since I am in PowerShell, I am unsure how I might approach solving this. I am attempting to grant permissions on the filesystem, with something like this:
ICACLS "var/lib/clickhouse/data" /setowner "administrator"
But then, since clickhouse is dockerized, it seems I cannot find the path:
The system cannot find the path specified.
Would I have to run docker compose? Or am I approaching this the wrong way?
ATTEMPT 1
I've tried running the following:
docker run --rm -i --entrypoint /bin/sh yandex/clickhouse-server -c id clickhouse
#got back:
uid=0(root) gid=0(root) groups=0(root)
#went into the system and ran
docker exec -it container-id bash
chown -R 0:0 /var/lib/clickhouse
#got back
chown: cannot read directory '/var/lib/clickhouse/System Volume Information': Permission denied
You should run docker and clickhouse in Linux instead of Windows.
Turns out this is an issue which has not been fixed in Docker Desktop for Windows:
https://github.com/docker/for-win/issues/39
Volume mounts are necessary, but I got around it by changing the disk image location to the target host drive: under Settings -> Advanced, change the virtual hard disk image to the drive you want, and you can write within that drive. Note you still won't have access to the raw data.
This is still an issue, but the practical solution is to move to Docker volumes. With a bind mount there is a problem under WSL 2.
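In practice that means replacing the E:/ bind mount with a named volume, e.g. (keeping the flags from the question; the volume name is a placeholder):

docker volume create clickhouse-volume
docker run -d -p 8124:8124 --name my_database --ulimit nofile=262144:262144 --volume=clickhouse-volume:/var/lib/clickhouse yandex/clickhouse-server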
I am using docker on Windows. With the use of Kitematic, I have created an Ubuntu container. This Ubuntu image has postgresql installed on it.
Wondering if there is any possibility of accessing the postgres configuration files available in the container from the host (Windows machine)?
Where exactly does the container store its filesystem on the host machine?
I assume it would be part of an image file in VMDK format.
Please correct me if I'm wrong.
Wondering if there is any possibility of accessing the postgres configuration files available in the container from the host (Windows machine)
That is not how you would modify a file in a container with Docker.
For that, you should mount a host (Windows) folder when starting (docker run -v) your container.
See "Mount a host directory as a data volume"
docker run -d -P --name web -v /c/Users/<myACcount>/src/webapp:/opt/webapp training/webapp python app.py
Issue 247 mentions ~/Library/Application Support/Kitematic for App data, and ~/Kitematic "for easy access to volume data".
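Alternatively, docker cp can copy a file out of (and back into) a running container without any mounts; the config path below is a guess, since it depends on how postgresql was installed in your image:

docker cp <container-id>:/etc/postgresql/<version>/main/postgresql.conf .
# edit it locally, then copy it back and restart the container
docker cp postgresql.conf <container-id>:/etc/postgresql/<version>/main/postgresql.conf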
I have an issue with running a postgres container with volumes set up for the data folder on my Mac OS machine.
I tried to run it like this:
docker run \
--name my-postgres \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=some_db_dev \
-v $PG_LOCAL_DATA:/var/lib/postgresql/data \
-d postgres:9.5.1
Every time I got the following result in the logs:
* Starting PostgreSQL
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are enabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
initdb: could not create directory "/var/lib/postgresql/data/pg_xlog": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data"
The versions of docker, docker-machine, VirtualBox and boot2docker are:
docker-machine version 0.6.0, build e27fb87
Docker version 1.10.2, build c3959b1
VirtualBox Version 5.0.16 r105871
boot2docker 1.10.3
I have seen many publications about this topic, but most of them are outdated. I tried a solution similar to the one for mysql, but it did not help.
Maybe somebody can update me: does a solution exist to run a postgres container with data volumes through docker-machine?
Thanks!
If you are running docker-machine on a Mac, at this time, you cannot mount a directory that is not part of your local user space (/Users/<user>/) without extra configuration.
This is because on the Mac, Docker makes a bind mount automatically with the home ~ directory. Remember that since Docker is being hosted in a VM that isn't your local Mac OS, any volume mounts are relative to the host VM - not your machine. That means by default, Docker cannot see your Mac's directories since it is being hosted on a separate VM from your Mac OS.
Mac OS => Linux Virtual Machine => Docker
          ^-----------------------------^
              Docker can see the VM
^--------------------X------------------^
       Docker can't see the Mac OS
If you open VirtualBox, you can create other mounts (i.e. shared folders) from your local machine to the host VM and then access them that way.
See this issue for specifics on the topic: https://github.com/docker/machine/issues/1826
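For example, with the default docker-machine VM (usually named default), something along these lines should add a shared folder from the Mac side; the share name and host path are illustrative, and you may still need to mount the share from inside the VM:

docker-machine stop default
VBoxManage sharedfolder add default --name pgdata --hostpath /Volumes/Data/pgdata --automount
docker-machine start default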
I believe the Docker team is adding these capabilities in upcoming releases (especially since a native Mac version is in the works).
You should use docker named volumes instead of folders on your local file system.
Try creating a volume:
docker volume create my_vol
Then mount the data directory in your above command:
docker run \
--name my-postgres \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=some_db_dev \
-v my_vol:/var/lib/postgresql/data \
-d postgres:9.5.1
Check out my blog post for a whole postgres + node + TypeScript docker setup, for both dev and prod: https://yzia2000.github.io/blog/2021/01/15/docker-postgres-node.html
For more on docker volumes: https://docs.docker.com/storage/volumes/
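One thing to keep in mind with named volumes: the files live inside Docker's storage area rather than in your home directory, so the easiest way to back them up is through another container, e.g. (a sketch; the archive name is a placeholder):

docker run --rm -v my_vol:/data -v "$(pwd)":/backup alpine tar czf /backup/pg-data-backup.tar.gz -C /data .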