How to duplicate an index in Elasticsearch?

I am working with Elasticsearch and I want to create the same index on my local Elasticsearch instance as exists on the production instance, with the same mappings and settings.
One way I can think of is setting the same mappings manually. Is there any better way of copying the index metadata to my local instance? Thanks.

Simply send a GET request to https://source-es-ip:port/index_name/_mappings
and PUT the output to https://destination-es-ip:port/index_name.
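For example, a rough sketch with curl (assuming no authentication is needed and that jq is available to unwrap the index name from the GET response):
# fetch the mappings and strip the outer index_name wrapper
curl -s 'https://source-es-ip:port/index_name/_mappings' | jq '.index_name' > index_meta.json
# create the index on the destination with those mappings
curl -XPUT 'https://destination-es-ip:port/index_name' -H 'Content-Type: application/json' -d @index_meta.json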
Copying the data itself can be achieved with the Elasticsearch Reindex API;
for reference, you can see this link.
For example, to achieve this I would use this Python script:
from elasticsearch import Elasticsearch
from elasticsearch.helpers import reindex
import urllib3

# suppress TLS warnings for self-signed certificates
urllib3.disable_warnings()

es_source = Elasticsearch(hosts=['ip:port'], <other params>)
es_target = Elasticsearch(hosts=['ip:port'], <other params>)

# copy every index matching the name/pattern from the source cluster to the target
for index in es_source.indices.get('<index name/pattern>'):
    r = reindex(es_source, source_index=index, target_index=index,
                target_client=es_target, chunk_size=500)
    print(r)
And this works even while copying indices across different versions of ES.
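If you would rather not script it, recent ES versions can also pull the data themselves via reindex-from-remote; a sketch (the source host must first be added to reindex.remote.whitelist in the destination's elasticsearch.yml):
curl -XPOST 'https://destination-es-ip:port/_reindex' -H 'Content-Type: application/json' -d '
{
  "source": { "remote": { "host": "https://source-es-ip:port" }, "index": "index_name" },
  "dest":   { "index": "index_name" }
}'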

I use a Docker image for this; details: https://hub.docker.com/r/taskrabbit/elasticsearch-dump/
(The advantage of using the Docker image is that you don't need to install Node and npm on your system; having Docker running is enough.)
Once Docker is installed and you have pulled the taskrabbit image, you can run it to dump an Elasticsearch index from your remote server to local, and vice versa, using these run commands:
sudo docker run --net=host --rm -ti taskrabbit/elasticsearch-dump --input=http://<remote-elastic>/testindex --output=http://<your-machine-ip>:9200/testindex --type=mapping
sudo docker run --net=host --rm -ti taskrabbit/elasticsearch-dump --input=http://<remote-elastic>/testindex --output=http://<your-machine-ip>:9200/testindex --type=data
The first command copies the mapping, while the second dumps the data. To copy an index from your local Elasticsearch to the remote one, just reverse the input and output.
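The same image can also dump an index to a plain JSON file instead of a second cluster, which helps when the two machines cannot reach each other directly; a sketch (mount a host directory so the file ends up outside the container):
sudo docker run --net=host --rm -ti -v /tmp:/tmp taskrabbit/elasticsearch-dump --input=http://<remote-elastic>/testindex --output=/tmp/testindex.json --type=data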

Related

How to persist Memgraph data to local hard drive?

I am running Memgraph on Windows 11 WSL using this command:
docker run -it -p 7687:7687 -p 3000:3000 -e MEMGRAPH="--bolt-port=7687" -v mg_lib:/mnt/c/temp/memgraph/lib -v mg_log:/mnt/c/temp/memgraph/log -v mg_etc:/mnt/c/temp/memgraph/etc memgraph
Then I created a node, but when I checked, those folders were still empty.
How do I persist Memgraph data to the local hard drive?
Memgraph uses two mechanisms to ensure data durability:
write-ahead logs (WAL) and
periodic snapshots.
Snapshots are taken periodically during the entire runtime of Memgraph. When a snapshot is triggered, the whole data storage is written to disk. Write-ahead logs save all database modifications to a file. When running Memgraph with Docker, both of these mechanisms rely on the user creating volumes to store this data when starting Memgraph.
There are two fields to specify for each volume.
The first is the name of the volume, and it's unique on a given host machine. In your case, that would be mg_lib, mg_log, and mg_etc.
The second field is the path where the file or directory is mounted in the container. In the case of Memgraph, that would be:
/var/lib/memgraph (this is where the durability related files are saved)
/var/log/memgraph (logs)
/etc/memgraph (configuration settings)
Given these paths, the command to run Memgraph with Docker is:
sudo docker run -it -p 7687:7687 -p 3000:3000 -v mg_lib:/var/lib/memgraph -v mg_log:/var/log/memgraph -v mg_etc:/etc/memgraph memgraph
By default, the volumes on the host machine can be found in:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes
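If you want to confirm where a given volume actually lives, you can also ask Docker directly; the Mountpoint field shows the path inside Docker's storage area:
# show the volume's metadata, including its Mountpoint on the host
docker volume inspect mg_lib
# or print just the path
docker volume inspect -f '{{ .Mountpoint }}' mg_lib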
I hope this answer can provide some clarity.

Using composer-wallet-redis, where can I see the card in the Redis service?

I followed the guideline:
install the composer-wallet-redis image and start the container;
export NODE_CONFIG={"composer":{"wallet":{"type":"@ampretia/composer-wallet-redis","desc":"Uses a local redis instance","options":{}}}}
composer card import admin@test-network.card
I found the card is still stored on my local machine at ~/.composer/card/.
How can I check whether the card exists in the Redis server?
How to import the business network cards into the cloud custom wallet?
The primary issue (which I will correct in the README) is that the module name should be composer-wallet-redis. The @ampretia scope was a temporary repo.
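So the export from the question would become (quotes added here so the shell passes the JSON through as a single value):
export NODE_CONFIG='{"composer":{"wallet":{"type":"composer-wallet-redis","desc":"Uses a local redis instance","options":{}}}}'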
Assuming that redis is started on the default port, you can run the redis CLI like this
docker run -it --link composer-wallet-redis:redis --rm redis redis-cli -h redis -p 6379
You can then issue Redis CLI commands to look at the data, though it is not recommended to view or modify the data this way beyond confirming to yourself that it's working. The KEYS * command will display everything, but this should only be used in a development context. See the warnings on the Redis docs pages.
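For example, once connected with the command above, a quick development-only check might look like this (the exact key names will depend on your card and network names):
# list every key in the wallet store (development only)
KEYS *
# inspect how one of the listed entries is stored
TYPE <one of the keys listed by KEYS>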
export NODE_ENVIRONMENT
start the Docker container
composer card import
execute the docker run *** command followed by KEYS *; it returns an empty list or set.
@Calanais

Persist Elasticsearch Data in Docker Container

I have a working ES Docker container that I run like so:
docker run -p 80:9200 -p 9300:9300 --name es-loaded-with-data --privileged=true --restart=always es-loaded-with-data
I loaded up ES with a bunch of test data and wanted to save it in that state, so I followed up with:
docker commit containerid es-tester
docker save es-tester > es-tester.tar
Then when I load it back in, the data is all gone... what gives?
docker load < es-tester.tar
If you started from the official ES image, it is using a volume (https://github.com/docker-library/elasticsearch/blob/7d08b8e82fb8ca19745dab75ee32ba5a746ac999/2.1/Dockerfile#L41). Because of this, any data written to that volume will not be committed by Docker.
In order to back up the data, you need to copy it out of the container: docker cp <container name>:/usr/share/elasticsearch/data <dest>
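If the goal is simply to keep the test data around between runs, another option is to mount a named volume yourself so the data lives outside the container; a sketch (image name and ports taken from the question, data path from the official image):
# create a named volume once
docker volume create esdata
# run ES with the data directory backed by that volume so it survives container removal
docker run -p 80:9200 -p 9300:9300 --name es-loaded-with-data -v esdata:/usr/share/elasticsearch/data es-loaded-with-data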

Join a RethinkDB Cluster using ReQL instead of command line argument

I'm using Docker, which by default runs the RethinkDB process with only the --bind all argument.
Joining a cluster requires the --join argument or a configuration file, and doing that with Docker would require building a new Docker image just for this purpose.
How can I join a cluster using ReQL directly (thus eliminating the need to create a new Docker image)? I could simply connect to the lone instance, add a row to a system table (like server_status), and the instance would connect to the newly entered external instance.
I could repeat this process for each node in the cluster. It would also make things simpler when nodes come up and go down; otherwise I would have to restart each RethinkDB process.
In Docker, we can override the CMD that invokes the RethinkDB process with a custom command to customize how RethinkDB is run. Instead of simply calling docker run rethinkdb, we can pass a rethinkdb command that joins the first node.
Example using the official RethinkDB Docker image:
docker run --rm -it -p 9080:8080 rethinkdb
Then we can inspect its IP address (see the docker inspect sketch below); assuming it's 172.17.0.2, we can start a second one:
docker run --rm -it -p 9081:8080 rethinkdb rethinkdb --join 172.17.0.2:29015 --bind all
Visit the RethinkDB dashboard and you should see two nodes now.
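If you are unsure of the first container's address, one way to look it up (the container name or ID will differ on your machine):
# print the container's IP on its attached network(s)
docker inspect -f '{{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}' <first-container-id>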

Elasticsearch is working at port 9200 but Kibana is not working

Hello, I am starting to work with Kibana and Elasticsearch. I am able to run Elasticsearch at port 9200, but Kibana is not running at port 5601. The following two images are given for clarification.
Kibana is not running and shows that the page is not available.
Kibana doesn't support spaces in the folder name. Your folder name is
GA Works
Remove the space between those two words; Kibana will then run without errors and you will be able to access it at
http://localhost:5601
You can rename the folder to
GA_Works
Have you
a) set elasticsearch_url to point at your Elasticsearch instance in config/kibana.yml?
b) run ./bin/kibana (or bin\kibana.bat on Windows) after setting the above config?
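For Kibana 4.x that would look roughly like this (assuming a tarball install, run from the Kibana directory):
# in config/kibana.yml, point Kibana at Elasticsearch:
elasticsearch_url: "http://localhost:9200"
# then start Kibana:
./bin/kibana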
If you tried all of the above and it still doesn't work, make sure that the Kibana process is actually running first. I found that /etc/init.d/kibana4_init doesn't start the process; if that is the case, then try /opt/kibana/bin/kibana directly.
I also made the kibana user and group the owner of the folder/files.
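A couple of quick ways to confirm the process is actually up (assuming a Linux host):
# is a Kibana process running?
ps -ef | grep -i kibana
# is anything listening on Kibana's port?
netstat -tlnp | grep 5601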
