Changing Elasticsearch FS repository path

I'm using Elasticsearch 8.3 on a Windows machine, and created a shared file system repository, my_repo:
PUT _snapshot/my_repo
{
  "type": "fs",
  "settings": {
    "location": "relative_path"
  }
}
The base path for this repository is set in the .yml file:
path.repo: \\path1\path1
If I change the path in the .yml file (e.g. to \\path2\path2), restart Elasticsearch, and try to recreate/update my_repo:
PUT _snapshot/my_repo
{
  "type": "fs",
  "settings": {
    "location": "relative_path"
  }
}
I'm getting an error:
"[my_repo] Could not read repository data because the contents of the
repository do not match its expected state. This is likely the result
of either concurrently modifying the contents of the repository by a
process other than this cluster or an issue with the repository's
underlying storage. The repository has been disabled to prevent
corrupting its contents. To re-enable it and continue using it please
remove the repository from the cluster and add it again to make the
cluster recover the known state of the repository from its physical
contents."
Only if I change back to the previous path (\\path1\path1) and set my_repo to readonly = true
can I then change the path (to \\path2\path2) and update the repository.
I know that only one cluster can register the repository for write, but here I have only one cluster...
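For what it's worth, the error message itself points at the way out. A minimal sketch, assuming Elasticsearch listens on localhost:9200: unregister the repository (this removes only the definition, not the snapshot data on disk) and register it again so the cluster re-reads the repository's physical contents:
# Unregister the repository definition; snapshot files on disk are untouched
curl -X DELETE "localhost:9200/_snapshot/my_repo"

# Re-register it so the cluster recovers the repository state from disk
curl -X PUT "localhost:9200/_snapshot/my_repo" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "relative_path"
  }
}'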

Related

Requesting specific tagged version for locally developed composer package

I am developing a package for a Laravel project on my local machine. I have also spun up a Laravel app so I can manually test the package. My package is located at /home/me/packages/me/my-package and a commit (git) has been tagged with '0.1'.
I want to be able to switch between tagged versions and use specific versions in different projects, but I'm having issues.
In my main app's composer file, I am requiring the package like so:
...
"require" : {
  "me/my-package" : "0.1"
}
...
"repositories" : [
  {
    "type": "path",
    "url": "/home/me/packages/me/my-package"
  }
]
This results in an error:
Problem 1
- Root composer.json requires me/my-package 0.1, found me/my-package[dev-main] but it does not match the constraint.
I have also tried:
"require" : {
"me/my-package" : "dev-main#0.1"
}
(This was an idea taken from How to use a specific tag/version with composer and a private git repository?). This goes through without any errors but:
$ composer show | grep me/my-package
me/my-package dev-main My Package
What is the correct way to install a specific version of a package when developing it locally?
Probably the only reason you hit this message is that you have "type": "path" and not "type": "vcs".
With a path repository, Composer will only ever see one version, and that one version is dev-main. The reason is:
If the package [path repository] is a local VCS repository, the version may be inferred by the branch or tag that is currently checked out. (ref)
You have the main branch checked out at /home/me/packages/me/my-package (the content of /home/me/packages/me/my-package/.git/HEAD is ref: refs/heads/main, and /home/me/packages/me/my-package/.git/refs/heads/main points to the git revision), and Composer will only take that one.
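You can see what Composer will infer by inspecting HEAD yourself (a quick check, using the paths from the question):
# The ref Composer sees for a path repository is whatever is checked out
cat /home/me/packages/me/my-package/.git/HEAD
# ref: refs/heads/main  -> inferred as dev-main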
You should have no problem making that change from path to vcs, given that:
You already have a (git) repository at /home/me/packages/me/my-package (it looks so from your question).
You know the absolute path on your local system to that repository (again, it looks so from your question: /home/me/packages/me/my-package).
Given these two points, Composer is able to obtain the VCS tagged versions from that path. So basically, only the "type" needs to change:
"repositories" : [
{
"type": "vcs",
"url": "/home/me/packages/me/my-package"
}
]
Just take care that "url" contains the absolute path (and there is a git repository at that place). Likely already all set in your case, just saying.
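As a hypothetical end-to-end check after the change (assuming the tag 0.1 exists in the local repository, and /path/to/app stands in for your main app):
# In the package: confirm the tag exists
cd /home/me/packages/me/my-package && git tag
# 0.1

# In the consuming app: require the tagged version and verify it
cd /path/to/app
composer require "me/my-package:0.1"
composer show me/my-package
# versions : * 0.1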
Git is very prominent, which is why I mentioned it here; for other types of VCS, Composer also has options at hand. The details, also for git etc., are available here:
VCS - Repositories (getcomposer.org)

How to set up an Elasticsearch snapshot repository for a multinode cluster?

I set up a snapshot repository for a one-node ES cluster running inside a container.
version 7.7.1
path.repo=[/usr/share/elasticsearch/data/snapshot]
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/data/snapshot"
  }
}
It works well.
However, in a multinode cluster it fails with a RepositoryVerificationException.
How should I change the above code to be able to use it?
I came across these sources, but both are unclear about what to do exactly:
https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshots-register-repository.html
https://discuss.elastic.co/t/snapshot-and-path-repo-on-one-cluster-node/155717
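For the record: the usual cause of a RepositoryVerificationException on a multinode cluster is that the repository location is not a filesystem shared by all nodes; the same path must be accessible from every master and data node. A sketch of the common fix, assuming a hypothetical NFS export nfs-server:/exports/es-snapshots and default ports:
# On every master and data node: mount the same share at the same path
mount -t nfs nfs-server:/exports/es-snapshots /mnt/es-snapshots

# In elasticsearch.yml on every node (restart required):
# path.repo: ["/mnt/es-snapshots"]

# Register the repository once, against any node
curl -X PUT "localhost:9200/_snapshot/my_backup" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "/mnt/es-snapshots"
  }
}'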

Elasticsearch snapshot restore to another cluster

How do I restore an Elasticsearch snapshot to another cluster, without repository-s3, repository-hdfs, repository-azure, or repository-gcs?
This answer is with respect to Elasticsearch 7.14. It is possible to host a snapshot repository on an NFS. Since you would like to restore a snapshot of one cluster to another, you need to meet the following prerequisites:
The NFS should be accessible from both the source and the destination cluster.
The version of the source and destination cluster should be the same. At most, the destination cluster can be one major version higher than the source cluster. E.g. you can restore a 5.x snapshot to a 6.x cluster, but not to a 7.x cluster.
Ensure that the shared NFS directory is owned by uid:gid = 1000:0 (the elasticsearch user) and that appropriate permissions are given (chmod -R 777 <appropriate NFS directory> as the elasticsearch user); see the sketch after this list.
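A sketch of the ownership and permission commands from the last point, assuming the repository directory used in the steps below:
# Run against the NFS-backed repository directory
chown -R 1000:0 /usr/share/elasticsearch/snapshotrepo
chmod -R 777 /usr/share/elasticsearch/snapshotrepo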
Now, I am detailing the steps that you could take to copy the data.
Create a repository of type fs on the source cluster:
PUT http://10.29.61.189:9200/_snapshot/registry1
{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/snapshotrepo",
    "compress": true
  }
}
Take a snapshot in the created repository:
PUT http://10.29.61.189:9200/_snapshot/registry1/snapshot_1?wait_for_completion=true
{
  "indices": "employee,manager",
  "ignore_unavailable": true,
  "include_global_state": false,
  "metadata": {
    "taken_by": "binita",
    "taken_because": "test snapshot restore"
  }
}
Create a repository of type url on the destination cluster. Type url ensures that the same repository (in terms of the shared NFS path) is read-only with respect to the destination cluster: the destination cluster can only restore/read snapshot info, but cannot write snapshots.
PUT http://10.29.59.165:9200/_snapshot/registry1
{
  "type": "url",
  "settings": {
    "url": "file:/usr/share/elasticsearch/snapshotrepo"
  }
}
Restore the snapshot generated on the source cluster (in step 2) to the destination cluster:
POST http://10.29.59.165:9200/_snapshot/registry1/snapshot_1/_restore
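If the restore fails, a quick sanity check is to list what the destination cluster can actually see through the read-only url repository (same hosts as in the steps above):
curl -X GET "http://10.29.59.165:9200/_snapshot/registry1/_all?pretty"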
For more info, refer to: https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshots-restore-snapshot.html
Finally I found the solution. It works fine; please read carefully and follow along.
If you have questions, contact me at waruna94kithruwan#gmail.com.
I have two Elasticsearch clusters. I want to migrate elastic_01's data to elastic_02,
i.e. restore a snapshot from elastic_01 to elastic_02. Let's go.
Important
Verify that elastic_01 and elastic_02 both have the folder "/home/snapshot/".
If it does not exist, create this folder first.
Set the correct permissions on this folder.
Verify that the elastic_01 and elastic_02 versions are the same.
Elasticsearch snapshot documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html
(01) Set elastic_01 snapshot settings
$ curl -XPUT 'localhost:9200/_snapshot/first_backup' -H 'Content-Type: application/json' -d '{
  "type": "fs",
  "settings": {
    "location": "/home/snapshot/",
    "compress": true
  }
}'
(02) Add the snapshot location to elasticsearch.yml (elastic_01)
Edit the elasticsearch.yml file, add this line, save, and restart the node:
path.repo: ["/home/snapshot/"]
(03) Create a snapshot (elastic_01)
$ curl -XPUT "localhost:9200/_snapshot/first_backup/snapshot_1?wait_for_completion=true"
(04) Set elastic_02 snapshot settings
$ curl -XPUT 'localhost:9200/_snapshot/first_backup' -H 'Content-Type: application/json' -d '{
  "type": "fs",
  "settings": {
    "location": "/home/snapshot/",
    "compress": true
  }
}'
(05) Add the snapshot location to elasticsearch.yml (elastic_02)
Edit the elasticsearch.yml file, add this line, save, and restart the node:
path.repo: ["/home/snapshot/"]
(06) Create a snapshot (elastic_02)
$ curl -XPUT "localhost:9200/_snapshot/first_backup/snapshot_1?wait_for_completion=true"
(07) Copy the elastic_01 snapshot to elastic_02
Delete the content of the elastic_02 snapshot folder: $ rm -rf /home/snapshot/*
Copy the content of the elastic_01 snapshot folder to the elastic_02 snapshot folder.
(08) List snapshots
$ curl -XGET 'localhost:9200/_snapshot/first_backup/_all?pretty'
It will show the backed-up indexes and snapshot-related data.
(09) Restore the Elasticsearch snapshot
$ curl -XPOST "localhost:9200/_snapshot/first_backup/snapshot_1/_restore?wait_for_completion=true"
NOTE: We need to set the parameter "include_global_state" to "true" to restore the template, as per https://www.elastic.co/guide/en/elasticsearch/client/curator/current/option_include_gs.html:
curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore?pretty" -H 'Content-Type: application/json' -d'
{
  "include_global_state": true
}
'
{
  "accepted" : true
}
Your idea is to first create a snapshot on nodeB, then delete its data and overwrite this location with nodeA's data?
But according to Elastic's documentation, nodeB should mount the NFS directory in a read-only manner, so that it does not have write permissions, for example by using a type: url repository:
PUT _snapshot/local
{
  "type": "url",
  "settings": {
    "url": "file:/home/esdata/snapshot"
  }
}
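On the OS side, the read-only intent can be enforced at mount time as well; a sketch with a hypothetical NFS export:
# Mount the snapshot share read-only on the destination node
mount -t nfs -o ro nfs-server:/exports/es-snapshots /home/esdata/snapshot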

Composer custom repositories local AND remote

We are using custom repositories that are not held on Packagist, and thus need to use composer's "repositories" key:
{
  "type": "vcs",
  "url": "https://github.com/name/repo"
},
However, we also want to develop these locally before pushing them to GitHub:
{
  "type": "vcs",
  "url": "/path/to/repo"
},
{
  "type": "vcs",
  "url": "https://github.com/name/repo"
}
However, if a new user downloads the repo and just wants to use it from GitHub (maybe they won't be developing locally), they get a big red error:
[InvalidArgumentException]
No driver found to handle VCS repository /path/to/dir
Is there a way that composer can tolerate this and just move down to the next line where it will find the repo?
As far as I know, this is not possible right now. The defined "/path/to/dir" needs to exist, it needs to be a repo, and the repo needs to contain a composer.json file; otherwise Composer will fail.
Sounds like a valid point for a PR to ignore an invalid repository definition, but not sure what Jordi thinks of this ;)
As an alternative: you could set up your own Satis repo and pull the package from there.
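One hypothetical workaround (not a built-in fallback mechanism): keep composer.json pointing only at GitHub, and let developers who actually have the local checkout add the path repository themselves via composer config. Note that this writes into composer.json, so the change should not be committed:
# Only on machines where /path/to/repo exists; do not commit the result
composer config repositories.local path /path/to/repo
composer update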

MongoDB How to find out data directory using Java driver

I am using an instance of MongoDB with just one node. I would like to write a web service that fsyncs the data files and zips them into a backup folder.
Ideally, I would get the location of the data directory programmatically (rather than reading a config file) so I can easily port this from a development to a production machine, where the installation paths differ. Is there any way to do this using the Java driver?
Try running use admin followed by db.runCommand({getCmdLineOpts: 1}), as outlined here, and then playing with the returned data.
Example return data is:
{
  "argv" : [
    "mongod",
    "--port",
    "6669",
    "--dbpath=c:\\data\\mongo2",
    "--rest"
  ],
  "parsed" : {
    "dbpath" : "c:\\data\\mongo2",
    "port" : 6669,
    "rest" : true
  },
  "ok" : 1
}
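The Java driver can issue the same command via runCommand against the admin database; from a shell, a quick check (assuming the port from the example above and the legacy mongo shell) would be:
# Pull just the dbpath out of getCmdLineOpts
mongo --port 6669 --quiet --eval 'db.getSiblingDB("admin").runCommand({getCmdLineOpts: 1}).parsed.dbpath'
# c:\data\mongo2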
You could use mongoexport to get the data; run it from the production machine and specify the host/port/collection of the development machine. The data can then be imported on the production machine using mongoimport.
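A sketch of that route, with hypothetical hosts, database, and collection names:
# Export a collection from the development machine...
mongoexport --host devhost --port 6669 --db mydb --collection mycoll --out mycoll.json

# ...and import it on the production machine
mongoimport --host prodhost --port 27017 --db mydb --collection mycoll --file mycoll.json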
