macOS Elasticsearch snapshot location - elasticsearch

On macOS, I am trying to create an Elasticsearch snapshot repository:
PUT http://localhost:9400/_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/Users/Edison/Elasticsearch/Repository"
  }
}
My computer runs macOS.
I don't understand how to set the location path.
This is my error message:
{
"error": "RepositoryException[[my_backup] failed to create repository]; nested: CreationException[Guice creation errors:\n\n1) Error injecting constructor, org.elasticsearch.repositories.RepositoryException: [my_backup] location [/Users/Edison/Elasticsearch/Repository] doesn't match any of the locations specified by path.repo because this setting is empty\n at org.elasticsearch.repositories.fs.FsRepository.<init>(Unknown Source)\n while locating org.elasticsearch.repositories.fs.FsRepository\n while locating org.elasticsearch.repositories.Repository\n\n1 error]; nested: RepositoryException[[my_backup] location [/Users/Edison/Elasticsearch/Repository] doesn't match any of the locations specified by path.repo because this setting is empty]; ",
"status": 500
}

Elasticsearch nodes require a shared drive that each node can write to; this shared directory is what the location property refers to.
The first task is to set up this shared storage. For example, you could choose a straightforward NFS mount: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-ubuntu-12-04 Once you have a mounted directory on each node, you can register your backup location.
Alternatively you can use a Samba share; this appears to be a guide for that: http://vichargrave.com/creating-elasticsearch-snapshots/
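A minimal sketch of the NFS variant, assuming the export already exists on a backup server (the server name, export path, and mount point below are placeholders):
# On each Elasticsearch node, mount the shared export:
sudo mount -t nfs backup-server:/exports/es-backups /mount/backups
# Then whitelist the mount point in elasticsearch.yml on every node:
#   path.repo: ["/mount/backups"]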

The error I faced:
"repository_exception","reason":"[my_backup] location [/tmp/my_backup] doesn't match any of the locations specified by path.repo because this setting is empty"}}}
Operating system: CentOS
[ec2-user@ip-10-33-207-201 config]$ curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
> "type": "fs",
> "settings": {
> "location": "/tmp/my_backup",
> "compress": true
> }
> }'
{"acknowledged":true}
[ec2-user@ip-10-33-207-201 config]$
Solution
You need to add the repository path to the elasticsearch.yml file:
path.repo: ["/tmp/my_backup"]
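Note that path.repo is only read at startup, so restart each node after editing elasticsearch.yml and then register the repository again. A minimal sketch of the full sequence, assuming a package install on CentOS (config path and service name may differ on your machine):
# 1. Whitelist the repository path (sketch; appends to the node's config):
echo 'path.repo: ["/tmp/my_backup"]' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
# 2. Restart Elasticsearch so the setting takes effect:
sudo service elasticsearch restart
# 3. Re-register the repository:
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/tmp/my_backup", "compress": true }
}'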
Reference
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html

Related

Chef: nginx didn't start when installing nginx-1.16.1 from source

I am trying to install nginx from source. My requirement is to install a specific version of nginx, namely 1.16.1, which is why I am building from source.
After running installNginx.rb, I see that the nginx.conf file got updated with the default nginx configs, but nginx -v says command not found.
Below is my configuration:
installNginx.rb
include_recipe 'nginx::source'

begin
  t = resources(:template => 'nginx.conf')
  t.source 'nginx.conf'
  t.cookbook 'my_nginx'
rescue Chef::Exceptions::ResourceNotFound
  Chef::Log.warn "Could not find template nginx.conf to modify"
end

service 'nginx' do
  action :restart
end
attributes/Source.rb
node.default['nginx']['source']['version'] = '1.16.1'
node.default['nginx']['source']['url'] = 'http://nginx.org/download/nginx-1.16.1.tar.gz'
node.default['nginx']['source']['checksum'] = 'f11c2a6dd1d3515736f0324857957db2de98be862461b5a542a3ac6188dbe32b'
metadata.rb
depends 'nginx'
After analysing the cookbook logs, what I observed is: the source version I set is 1.16.1, but for some reason the nginx::source recipe is pulling in 1.12.1 and nginx is not starting:
"nginx": {
"version": "1.12.1",
"package_name": "nginx",
"port": "80",
"dir": "/etc/nginx",
"script_dir": "/usr/sbin",
"log_dir": "/var/log/nginx",
"log_dir_perm": "0750",
"binary": "/opt/nginx-1.12.1/sbin/nginx",
"default_root": "/var/www/nginx-default",
"ulimit": "1024",
"cleanup_runit": true,
"repo_source": "nginx",
"install_method": "package",
"user": "webadmin",
"upstart": {
"runlevels": "2345",
"respawn_limit": null,
"foreground": true
}
"init_style": "init",
"source": {
"version": "1.16.1",
"prefix": "/opt/nginx-1.12.1",
"conf_path": "/etc/nginx/nginx.conf",
"sbin_path": "/opt/nginx-1.12.1/sbin/nginx",
"default_configure_flags": [
"--prefix=/opt/nginx-1.12.1",
"--conf-path=/etc/nginx/nginx.conf",
"--sbin-path=/opt/nginx-1.12.1/sbin/nginx",
"--with-cc-opt=-Wno-error"
],
"url": "http://nginx.org/download/nginx-1.16.1.tar.gz",
"checksum": "f11c2a6dd1d3515736f0324857957db2de98be862461b5a542a3ac6188dbe32b",
"modules": [
"nginx::http_ssl_module",
"nginx::http_gzip_static_module"
],
INFO: remote_file[nginx source] created file /var/chef/runs/58bffee4-b5aa-4632-97cd-0eeacc4ebd4c/local-mode-cache/cache/nginx-1.16.1.tar.gz
INFO: remote_file[nginx source] updated file contents /var/chef/runs/58bffee4-b5aa-4632-97cd-0eeacc4ebd4c/local-mode-cache/cache/nginx-1.16.1.tar.gz
I am unable to figure out where the issue is, any help is appreciated.
The attributes file in the nginx cookbook derives values from the version attribute in multiple places. For example, it uses the version to define the directory that nginx is installed to, as well as the download URL for the nginx sources:
default['nginx']['source']['prefix'] = "/opt/nginx-#{node['nginx']['source']['version']}"
default['nginx']['source']['url'] = "http://nginx.org/download/nginx-#{node['nginx']['source']['version']}.tar.gz"
Thus, if you later update the version attribute in your own cookbook, the download URL will not automatically be updated with the new version since it has no reference to it anymore.
To resolve this, you have two options:
You can manually set all related attributes in your cookbook. This is likely error-prone and may lead to inconsistencies as you have seen.
You can reload the default nginx attributes file after having set the overridden attributes. In your attributes file, this can look like:
override['nginx']['version'] = '1.16.1'
override['nginx']['source']['checksum'] = 'f11c2a6dd1d3515736f0324857957db2de98be862461b5a542a3ac6188dbe32b'
# Reload nginx::source attributes with our updated version
node.from_file(run_context.resolve_attribute('nginx', 'source'))
Note that the nginx cookbook maintains two nginx versions: node['nginx']['version'] and node['nginx']['source']['version'], with the latter value being set to the former value by default.
In your ohai output, you have only seen the node['nginx']['version'] attribute (which you have not overridden).
By overriding this attribute and reloading the attributes/source.rb file as shown above, things should be consistent again.
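One more check worth adding (a sketch; the prefix below follows the attribute pattern shown above): a source install places the binary under the versioned prefix rather than on $PATH, which is why nginx -v reports command not found.
# Query the source-built binary directly:
/opt/nginx-1.16.1/sbin/nginx -v
# Optionally expose it on $PATH (just one way to do it):
sudo ln -s /opt/nginx-1.16.1/sbin/nginx /usr/local/bin/nginx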

Error while connecting Logstash and Elasticsearch

I am very new to ELK. I installed ELK version 5.6.12 on a CentOS server. Elasticsearch and Kibana work fine, but I cannot connect Logstash to Elasticsearch.
I have set the environment variables as:
export JAVA_HOME=/usr/local/jdk1.8.0_131
export PATH=/usr/local/jdk1.8.0_131/bin:$PATH
I run a simple test:
bin/logstash -e 'input { stdin { } } output { elasticsearch { host => localhost:9200 protocol => "http" port => "9200" } }'
I get this error:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /etc/logstash/logstash.yml/log4j2.properties. Using default config which logs errors to the console
Simple "slash" mentioned in official documentation of Logstash works like following :
$bin/logstash -e 'input { stdin { } } output { stdout {} }'
Hello
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
The stdin plugin is now waiting for input:
{
    "@version" => "1",
        "host" => "localhost",
  "@timestamp" => 2018-11-01T04:44:58.648Z,
     "message" => "Hello"
}
What could be the problem?
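Two details in the failing command may be worth checking (an untested sketch, not a confirmed diagnosis): in Logstash 5.x the elasticsearch output expects a hosts array rather than the old host/protocol/port options, and the logstash.yml warning can be addressed by pointing --path.settings at the settings directory, as the warning itself suggests:
bin/logstash --path.settings /etc/logstash \
  -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost:9200"] } }'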

new composer-wallet - jszip error

I am making a new composer-wallet with composer 0.19.0
All tests passed fine (tests based on composer-wallet-filesystem).
I can successfully import business network cards to the new wallet and use them for transactions.
I have only one issue:
$ composer card list
Error: Can't find end of central directory : is this a zip file ? If it is, see http://stuk.github.io/jszip/documentation/howto/read_zip.html
Command failed
I tried to update jszip to the latest version in composer-cli, but the problem persists.
Here is the environment variable that configures the connection:
export NODE_CONFIG='{
  "composer": {
    "wallet": {
      "type": "composer-wallet-mongodb",
      "desc": "Uses a local mongodb instance",
      "options": {
        "uri": "mongodb://localhost:27017/yourCollection",
        "collectionName": "myWallet",
        "options": {
        }
      }
    }
  }
}'
Any help is welcome.

Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager

My cluster config file is as follows:
{
  "name": "SampleCluster",
  "clusterConfigurationVersion": "1.0.0",
  "apiVersion": "01-2017",
  "nodes": [
    {
      "nodeName": "vm0",
      "iPAddress": "here is my VPS ip",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r0",
      "upgradeDomain": "UD0"
    },
    {
      "nodeName": "vm1",
      "iPAddress": "here is my another VPS ip",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r1",
      "upgradeDomain": "UD1"
    },
    {
      "nodeName": "vm2",
      "iPAddress": "here is my another VPS ip",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r2",
      "upgradeDomain": "UD2"
    }
  ],
  "properties": {
    "reliabilityLevel": "Bronze",
    "diagnosticsStore": {
      "metadata": "Please replace the diagnostics file share with an actual file share accessible from all cluster machines.",
      "dataDeletionAgeInDays": "7",
      "storeType": "FileShare",
      "IsEncrypted": "false",
      "connectionstring": "c:\\ProgramData\\SF\\DiagnosticsStore"
    },
    "nodeTypes": [
      {
        "name": "NodeType0",
        "clientConnectionEndpointPort": "19000",
        "clusterConnectionEndpointPort": "19001",
        "leaseDriverEndpointPort": "19002",
        "serviceConnectionEndpointPort": "19003",
        "httpGatewayEndpointPort": "19080",
        "reverseProxyEndpointPort": "19081",
        "applicationPorts": {
          "startPort": "20001",
          "endPort": "20031"
        },
        "isPrimary": true
      }
    ],
    "fabricSettings": [
      {
        "name": "Setup",
        "parameters": [
          {
            "name": "FabricDataRoot",
            "value": "C:\\ProgramData\\SF"
          },
          {
            "name": "FabricLogRoot",
            "value": "C:\\ProgramData\\SF\\Log"
          }
        ]
      }
    ]
  }
}
It is almost identical to the demo file for an unsecured cluster from the standalone Service Fabric download, except for my VPS IPs. I enabled the Remote Registry service. I ran
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json
but I got the following error:
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager handle because 5.
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <Another IP Address> resulted in exception: Unable to change open service manager handle because 5.
Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces
LocalAdminPrivilege : True
IsJsonValid : True
IsCabValid :
RequiredPortsOpen : True
RemoteRegistryAvailable : False
FirewallAvailable :
RpcCheckPassed :
NoConflictingInstallations :
FabricInstallable :
DataDrivesAvailable :
Passed : False
Test Config failed with exception: System.InvalidOperationException: Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces folder.
at System.Management.Automation.MshCommandRuntime.ThrowTerminatingError(ErrorRecord errorRecord)
I don't understand the problem. The VPSs are not connected on a local network; they all have public IPs. I don't know whether that may be the issue. How do I make a virtual LAN among these VPSs? Can anyone give me some direction on this error? Any help is greatly appreciated.
Edit: I used the term VM instead of VPS.
Finally I got this working. All the nodes actually were on the same network; I had thought they weren't. I enabled file sharing, then tried to access the shared folder from the node where I ran the configuration test to each of the other nodes. I had to provide the login credentials, and then it worked like a charm. A sketch of such a pre-check follows.
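For reference, a sketch of that pre-check from the machine running TestConfiguration.ps1, using standard Windows commands (node IP and account are placeholders; error 5 in the log above is the Win32 access-denied code):
REM Can we authenticate against the remote admin share?
net use \\10.0.0.4\C$ /user:MYDOMAIN\adminuser
REM Is the Remote Registry service running on the remote node?
sc.exe \\10.0.0.4 query RemoteRegistry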

repository_missing_exception snapshot and restore in Elasticsearch

I have to transfer an Elasticsearch index from a Windows machine to an Ubuntu machine. I decided to take a snapshot of the index and then try to restore it on the other system.
I was successfully able to snapshot the index on the Windows machine.
On the windows machine in elasticsearch.yml I had path.repo: ["F:\\mount\\backups"].
So, under mount I had:
.
└── backups
└── old_backup
├── index
├── indices
│   └── old_index
│   ├── 0
│   ├── 1
│   ├── 2
│   ├── 3
│   ├── 4
│   └── meta-snapshot_to_ubuntu.dat
├── meta-snapshot_to_ubuntu.dat
└── snap-snapshot_to_ubuntu.dat
where snapshot_to_ubuntu is the name of the snapshot I made on Windows.
I placed this snapshot in ~/Documents/mount on the Ubuntu machine and started an instance of ES 2.3.0 with path.repo: ["/home/animesh/Documents/mount/backups"] in elasticsearch.yml.
I run the following on the command line:
curl -XGET localhost:9200/_snapshot/old_backup/snapshot_to_ubuntu?pretty=1
and get
{
  "error" : {
    "root_cause" : [ {
      "type" : "repository_missing_exception",
      "reason" : "[old_backup] missing"
    } ],
    "type" : "repository_missing_exception",
    "reason" : "[old_backup] missing"
  },
  "status" : 404
}
Where am I going wrong?
UPDATE:
I ran the following curl command:
curl -X POST http://localhost:9200/_snapshot/old_backup/snapshot_to_ubuntu/_restore
and I get:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_missing_exception",
        "reason": "[old_backup] missing"
      }
    ],
    "type": "repository_missing_exception",
    "reason": "[old_backup] missing"
  },
  "status": 404
}
I had a similar issue and I would like to share how I figured it out.
I will write out all the steps; I hope it helps other people as well.
I had to transfer an Elasticsearch index from a GCP server to my local machine. I decided to take a snapshot of the index and then restore it on my local machine.
I'm assuming you already have the snapshot(s).
The steps are:
Create a directory on your local machine with the snapshot(s) you want to restore
Navigate to the elasticsearch.yml file. For example, on my local machine the file is here: /usr/local/Cellar/elasticsearch/7.8.1/libexec/config/elasticsearch.yml
Add the repository path path.repo: [PATH_TO_BACKUP_DIR] in the elasticsearch.yml file. For example: path.repo: ["/mount/backups", "/mount/longterm_backups"]
Save, exit, and restart Elasticsearch
After all nodes are restarted, the following command can be used to register the shared file system repository with the name my_fs_backup:
curl -X PUT "localhost:9200/_snapshot/my_fs_backup?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "PATH_TO_BACKUP_DIR",
    "compress": true
  }
}'
For example: "location": "/usr/local/etc/elasticsearch/elastic-backup"
Check your configuration: curl -X GET "localhost:9200/_snapshot/_all?pretty"
Restore from snapshot:
First, get all snapshots: curl -X GET "localhost:9200/_snapshot/my_fs_backup/*?pretty"
This returns the list of available snapshots; pick the one you want (in case you have more than one).
Use this command to restore:
curl -X POST "localhost:9200/_snapshot/BACKUP_NAME/SNAPSHOT_ID/_restore?pretty" -H 'Content-Type: application/json' -d'
{
"indices": "users, events",
"ignore_unavailable": true,
"include_global_state": true
}
For example:
curl -X POST "localhost:9200/_snapshot/my_fs_backup/elastic-snapshot-2020.09.05-lwul1zb9qaorq0k9vmd5rq/_restore?pretty" -H 'Content-Type: application/json' -d'
{
"indices": "users, events",
"ignore_unavailable": true,
"include_global_state": true
}
Note that I restored only two indices: users and events.
Hope it helps 😃
More info and extended tutorials:
Elastic website, jee-appy blogspot
NOTE: This solution uses slightly different repository storage, but the behaviour is expected to be the same!
I know it's a zombie question, but I recently stumbled over this while testing the restore procedure of Elasticsearch snapshots with the Azure repository plugin.
I created a snapshot on our old PaaS OpenStack and tried restoring it on a fresh Azure Elastic cluster where I had tested the connectivity of Azure repositories before. I could still retrieve the repository definition in my case:
{
  "type": "azure",
  "settings": {
    "container": "restore",
    "chunk_size": "32MB",
    "compress": true
  }
}
But restoring always got me the missing repository exception:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_missing_exception",
        "reason" : "[restore] missing"
      }
    ],
    "type" : "repository_missing_exception",
    "reason" : "[restore] missing"
  },
  "status" : 404
}
It turns out another branch got deployed on my test Azure k8s cluster, which removed the Azure repository plugin and, with it, the connectivity to the repository. Even restoring the plugin did not fix the repository_missing_exception.
Carefully re-reading the docs (https://www.elastic.co/guide/en/elasticsearch/reference/7.9/snapshots-register-repository.html) gave me this:
You can unregister a repository using the delete snapshot repository API.
When a repository is unregistered, Elasticsearch only removes the reference to the location where the repository is storing the snapshots. The snapshots themselves are left untouched and in place.
So what solved the repository_missing_exception in my case was running the (a bit scary):
DELETE /_snapshot/restore
and then recreating the snapshot location with:
PUT https://localhost:9200/_snapshot/restore --data '
{
  "type": "azure",
  "settings": {
    "container": "restore",
    "chunk_size": "32MB",
    "compress": true
  }
}'
Then the previously failing snapshot restore command succeeded:
POST https://localhost:9200/_snapshot/restore/snapshot_2020810/_restore
{"accepted":true}
curl -XGET localhost:9200/_snapshot/old_backup/snapshot_to_ubuntu?pretty=1
That command only retrieves information about the snapshot. Because you didn't register a repository on the Ubuntu side, you get the error.
What you want is to restore so you should use _restore endpoint:
POST /_snapshot/old_backup/snapshot_to_ubuntu/_restore
Check: https://www.elastic.co/guide/en/elasticsearch/reference/2.3/modules-snapshots.html#_restore
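Putting that together for the question above, a minimal sketch (the location assumes the directory layout quoted in the question, and it must sit under path.repo):
# Register the repository on the Ubuntu machine first:
curl -XPUT 'localhost:9200/_snapshot/old_backup' -d '{
  "type": "fs",
  "settings": { "location": "/home/animesh/Documents/mount/backups/old_backup" }
}'
# Then restore the snapshot through the _restore endpoint:
curl -XPOST 'localhost:9200/_snapshot/old_backup/snapshot_to_ubuntu/_restore'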
