I'm using Docker, and by default it runs the RethinkDB process with only the --bind all argument.
Joining a cluster requires the --join argument or a configuration file, and doing that with Docker would seem to require building a new Docker image just for this purpose.
How can I join a cluster using ReQL directly (thus eliminating the need to create a new Docker image)? Ideally I could simply connect to the lone instance, add a row to a system table (such as server_status), and the instance would connect to the newly added external instance.
I could repeat this process for each node in the cluster. That would also make things simpler when nodes come up and go down; otherwise I would have to restart each RethinkDB process.
In Docker, we can override the CMD that invokes the RethinkDB process with a custom command, which lets us customize how RethinkDB is launched. Instead of simply calling docker run rethinkdb, we can pass a rethinkdb command that joins the first node.
Example using the official RethinkDB Docker image:
docker run --rm -it -p 9080:8080 rethinkdb
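To find that container's IP address, one option is docker inspect (this assumes the default bridge network; <container-name-or-id> is whatever docker ps shows for the container we just started):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name-or-id>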
Assuming the IP address turns out to be 172.17.0.2, we can start a second one:
docker run --rm -it -p 9081:8080 rethinkdb rethinkdb --join 172.17.0.2:29015 --bind all
Visit the RethinkDB dashboard and you should see two nodes now.
I have a docker-compose.yml file and I start a container with a DB via the
docker-compose up -d db command.
I need to execute a script on the host machine that, briefly speaking, loads a dump into the DB in the container.
So, now it looks like:
docker-compose up -d db
./script.sh
But I want to combine these two commands into one.
My question is "Is it possible?"
I found out that Docker Compose doesn't support this feature.
I know that I can create another script with these commands in it, but I want to leave only
docker-compose up -d db
UPD: I would like to mention that I am using the mcr.microsoft.com/mssql/server:2017-latest image.
Also, I have to say one more time that I need to execute the script on the host machine itself.
You can't use the Docker tools to execute commands on the host system. A general design point around Docker is that containers shouldn't be able to affect the host.
Nothing stops you from writing your own shell script that runs on the host and does the steps you need:
#!/bin/sh
docker-compose up -d db
./wait-for.sh localhost 1433
./script.sh
(The wait-for.sh script is the same as described in the answers to Docker Compose wait for container X before starting Y that don't depend on Docker health checks.)
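If you'd rather not depend on an external helper, a minimal stand-in for wait-for.sh can be sketched with nc (assuming an nc that supports -z is installed on the host; it simply polls until the port accepts connections):
#!/bin/sh
host="$1"; port="$2"
# poll until the TCP port is reachable
until nc -z "$host" "$port"; do
  echo "waiting for $host:$port..."
  sleep 1
done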
For your use case it may be possible to run the data importer in a separate container. A typical setup could look like this; note that the importer will run every time you run docker-compose up. You may want to actually build this into a separate image.
version: '3.8'
services:
  db: { same: as you have currently }
  importer:
    image: mcr.microsoft.com/mssql/server:2017-latest
    entrypoint: ./wait-for.sh db 1433 -- ./script.sh
    working_dir: /import
    volumes: ['.:/import']
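For reference, the script.sh used by that importer could be something as small as the sketch below (untested; it assumes the dump is a plain dump.sql in the mounted /import directory and that the SA password is supplied to the container as $SA_PASSWORD; sqlcmd ships with the SQL Server image under /opt/mssql-tools/bin):
#!/bin/sh
# load the dump into the db service once wait-for.sh has confirmed it is reachable
/opt/mssql-tools/bin/sqlcmd -S db -U sa -P "$SA_PASSWORD" -i /import/dump.sql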
The open-source database containers also generally support putting scripts in /docker-entrypoint-initdb.d that get executed the first time the container is launched, but the SQL Server image doesn't seem to support this; questions like How can I restore an SQL Server database when starting the Docker container? have a complicated setup to replicate this behavior.
From Windows, I connected to a Postgres Docker container from the local machine. But I can't see the tables that exist in the Postgres container. The data is not replicating locally. I followed this tutorial
for running the Postgres container on Windows.
I managed to create the tables from a dump file.
$ docker volume create --name postgres-volume
$ docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres
$ docker exec -it <container-id> bash -c "pg_dump -h <source-url> -U postgres -d postgres > /tmp/dump.sql"
$ docker exec -it <container-id> bash -c "psql -f /tmp/dump.sql -U postgres -d postgres"
Any help appreciated.
Containers
Containers are meant to be isolated instances of a program/service. They are isolated both from the host and from subsequent spawns of the same image. Each starts off on an isolated island, with nothing on it (that it didn't bring itself). Any data a container generates is lost upon its death, and it is also completely oblivious to any data on the host (for now). But sometimes we want its data to be persistent, or we want to "inject" our own data each time it starts up. Such is your case with PostgreSQL: we want PostgreSQL to have our schema available each time it starts up, and it would also be great if it retained any changes we made or data we loaded.
Docker Volumes
Enter Docker volumes. They are a good way to manage persistent storage for containers. They are meant to be mounted in containers, letting them write their data (or read data from prior instances), and that data will not be deleted if the container instance is deleted. Once you create a volume with docker volume create myvolume1, Docker creates a directory under /var/lib/docker/volumes/ (on Windows the default location is different, and it can be changed). You never have to be aware of the physical directory on your host; you only need to be aware of the volume name myvolume1 (or whatever name you choose for it).
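You can list your volumes and inspect one at any time to see what Docker knows about it (the Mountpoint shown will differ per platform):
docker volume ls
docker volume inspect myvolume1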
Containers with persistent data (docker volumes)
As we said, containers are by default completely isolated from the host, and specifically from its filesystem. This means that when a container starts up, it doesn't know what's on the host's filesystem, and when the container instance is deleted, the data it generated during its life perishes with it.
But that changes if we use Docker volumes. Upon a container's start-up, we can mount data from "outside" into it. This data can either be the Docker volume we spoke of earlier or a specific host path we choose (such as /home/me/somethingimport, which we manage ourselves). The latter isn't a Docker volume (it's a bind mount) but works much the same.
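As a rough sketch (the names and paths here are only examples, not from the tutorial), the two variants look like this on the docker run command line, first a named volume and then a bind-mounted host path:
docker run -e POSTGRES_PASSWORD=password -v myvolume1:/var/lib/postgresql/data -d postgres
docker run -e POSTGRES_PASSWORD=password -v /home/me/somethingimport:/import -d postgres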
Tutorial
The tutorial you linked talks about mounting both a path and a Docker volume (in separate examples). This is done with the -v flag when you execute docker run. Because Docker on Windows has an issue with permissions on the PostgreSQL data directory on the host (which is mounted in the container), they recommend using Docker volumes.
This means you'll have to create your schema and load any data you need after you've started an instance of PostgreSQL with a Docker volume. Subsequent restarts of the container must use the same Docker volume.
docker volume create --name postgres-volume
docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres
From the tutorial
These are the two important lines. The first creates a Docker volume and the second starts a fresh PostgreSQL instance. Any changes you make to that instance's data (DML, DDL) will be saved in the Docker volume postgres-volume. If you've previously spun up a container (for example, PostgreSQL) that uses that volume, it'll find the data just as it was left last time. In other words, what makes the second line a fresh instance is the fact that the Docker volume is empty (it was just created). Subsequent instances of PostgreSQL will find the schema and data you loaded previously.
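To convince yourself the data survived, start the container again with the same volume and list the tables from inside it, for example:
docker exec -it postgres_db psql -U postgres -d postgres -c '\dt'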
I am working on Elasticsearch and I want to create the same index on my local Elasticsearch instance as is created on the production instance, with the same mappings and settings.
One way I can think of is setting the same mappings manually; is there any other, better way of copying the index metadata to local? Thanks.
Simply send a GET request to https://source-es-ip:port/index_name/_mappings
and PUT the output to https://destination-es-ip:port/index_name.
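With curl, that could look roughly like the sketch below. Note that the GET response wraps everything under the index name, so only the inner "mappings" object should go into the PUT body (index settings can be fetched the same way from /index_name/_settings):
curl -s https://source-es-ip:port/index_name/_mappings
curl -X PUT https://destination-es-ip:port/index_name -H 'Content-Type: application/json' -d '{"mappings": { ...inner mappings object from the GET... }}'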
Copying the data can be achieved with the Elasticsearch Reindex API.
For reference you can see this link.
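As a plain-curl sketch of reindex-from-remote (the host and index names are placeholders, and the destination cluster must whitelist the source host via reindex.remote.whitelist in its elasticsearch.yml):
curl -X POST 'http://localhost:9200/_reindex' -H 'Content-Type: application/json' -d '{
  "source": { "remote": { "host": "https://source-es-ip:port" }, "index": "index_name" },
  "dest": { "index": "index_name" }
}'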
For example, to achieve this across many indices I would use this Python script:
from elasticsearch import Elasticsearch
from elasticsearch.helpers import reindex
import urllib3
urllib3.disable_warnings()
es_source = Elasticsearch(hosts=['ip:port'],<other params>)
es_target = Elasticsearch(hosts=['ip:port'],<other params>)
for index in es_source.indices.get('<index name/pattern>'):
    r = reindex(es_source, source_index=index, target_index=index, target_client=es_target, chunk_size=500)
    print(r)
And this works even while copying indexes across different versions of ES.
I use a Docker image for this; details: https://hub.docker.com/r/taskrabbit/elasticsearch-dump/
(The advantage of using the Docker image is that you don't need to install Node and npm on your system; having Docker running is enough.)
Once Docker is installed and you have pulled the taskrabbit image, you can run it to dump an Elasticsearch index from your remote server to local (and vice versa) using these run commands:
sudo docker run --net=host --rm -ti taskrabbit/elasticsearch-dump --input=http://<remote-elastic>/testindex --output=http://<your-machine-ip>:9200/testindex --type=mapping
sudo docker run --net=host --rm -ti taskrabbit/elasticsearch-dump --input=http://<remote-elastic>/testindex --output=http://<your-machine-ip>:9200/testindex --type=data
To copy an index from your local Elasticsearch to the remote one, just reverse the input and output. The first command copies the mapping, while the second dumps the data.
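If you also need the index settings, the same image supports further --type values (for example --type=settings and --type=analyzer, if I remember correctly), used in exactly the same way:
sudo docker run --net=host --rm -ti taskrabbit/elasticsearch-dump --input=http://<remote-elastic>/testindex --output=http://<your-machine-ip>:9200/testindex --type=settings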
I followed the guideline.
I installed the composer-wallet-redis image and started the container.
export NODE_CONFIG='{"composer":{"wallet":{"type":"@ampretia/composer-wallet-redis","desc":"Uses a local redis instance","options":{}}}}'
composer card import admin@test-network.card
I found the card is still stored on my local machine at the path ~/.composer/card/
How can I check whether the card exists in the Redis server?
How to import the business network cards into the cloud custom wallet?
The primary issue (which I will correct in the README) is that the module name should be composer-wallet-redis. The @ampretia scope was a temporary repo.
Assuming that Redis is started on the default port, you can run the Redis CLI like this:
docker run -it --link composer-wallet-redis:redis --rm redis redis-cli -h redis -p 6379
You can then issue Redis CLI commands to look at the data, though it is not recommended to view or modify it; it's useful to confirm to yourself that it's working. The KEYS * command will display everything, but it should only be used in a development context. See the warnings on the Redis docs pages.
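For a quick one-off check you can also pass the command straight to redis-cli instead of opening an interactive session, for example:
docker run -it --link composer-wallet-redis:redis --rm redis redis-cli -h redis -p 6379 KEYS '*'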
I exported NODE_ENVIRONMENT,
started the docker container,
ran composer card import,
and executed the docker run *** command followed by KEYS *, which returned (empty list or set).
@Calanais
I have to pull a Docker image from Docker Hub and start running multiple peers as containers.
Right now I am manually opening a terminal and executing my docker run command on the downloaded image, but I am planning to automate this process: if I/the user want 2 peers to run, then I should be able to provide the IP address and port information to the docker run command and start these peers in different terminals without the manual step.
After executing these commands I should be able to store these IP addresses and port numbers in a JSON file for further transactions.
Could you please help me? Thanks!
Got a quick solution for the above problem. Below is the command I applied: docker run -d IMAGE NAME /bin/bash. The above command runs the container as a background process. Also, I am getting the network credentials by running docker inspect <Container Id>.
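If it helps, the whole thing can be wrapped into a small shell script, sketched below; the image name, peer count and port 7051 are placeholders for your setup, and it assumes the image's default command keeps the container running. It starts each container in the background, reads its IP with docker inspect, and appends the details to a peers.json file.
#!/bin/sh
IMAGE="my-peer-image"   # placeholder: your peer image
COUNT=2                 # how many peers to start
echo '[' > peers.json
i=1
while [ "$i" -le "$COUNT" ]; do
  id=$(docker run -d "$IMAGE")
  ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$id")
  [ "$i" -gt 1 ] && echo ',' >> peers.json
  printf '  {"container": "%s", "ip": "%s", "port": 7051}\n' "$id" "$ip" >> peers.json
  i=$((i + 1))
done
echo ']' >> peers.json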