Unable to restore snapshot from URL repository with basic authentication - elasticsearch

Problem:
I'm trying to create a snapshot on ES instance ES1 and restore it to another instance ES2. The restore fails when the remote URL requires basic authentication.
ES version: 6.5.4
Success and failure cases:
Success case: When the website exposing the ES1 repository folder is open to the internet (no authentication), the listing/restore works fine. For example when the repo URL is "http://snapshots.example-server.com"
Failure case: When I add basic authentication to the same URL, it doesn't work. For example when the repo URL is: "http://username:password@snapshots.example-server.com"
When I try to list snapshots in the URL repository with basic authentication, the error I get is:
{
"error": {
"root_cause": [
{
"type": "repository_exception",
"reason": "[remote-repo] could not read repository data from index blob"
}
],
"type": "repository_exception",
"reason": "[remote-repo] could not read repository data from index blob",
"caused_by": {
"type": "i_o_exception",
"reason": "Server returned HTTP response code: 401 for URL: http://username:password#snapshots.example-server.com/index.latest"
}
},
"status": 500
}
The Setup:
Setting up ES1:
Step 1: Modify config file:
path.repo: ["/path/to/es1_repo"]
Step 2: Creating the repo:
PUT /_snapshot/es1_repo
{
"type": "fs",
"settings": {
"location": "/path/to/es1_repo"
}
}
Step 3: Making the repository path accessible from the internet:
I have set up an nginx server on the ES1 machine to expose the "/path/to/es1_repo" directory listing, let's say at http://snapshots.example-server.com. It has basic authentication enabled. You can, for example, access the repo as http://username:password@snapshots.example-server.com and you will see the directory listing.
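A minimal nginx server block for this kind of setup could look like the following (the server name, paths and htpasswd location are assumptions for illustration, not the exact values from the machine):
server {
    listen 80;
    server_name snapshots.example-server.com;

    location / {
        root /path/to/es1_repo;                     # the directory from path.repo
        autoindex on;                               # directory listing of the repository
        auth_basic "Snapshots";                     # basic authentication
        auth_basic_user_file /etc/nginx/.htpasswd;  # assumed credentials file
    }
}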
Step 4: Create the snapshot:
PUT /_snapshot/es1_repo/snapshot_1?wait_for_completion=true
{
"indices": "the_index_name",
"ignore_unavailable": true,
"include_global_state": false
}
Setting up ES2:
Step 5: Add to the Elasticsearch config:
repositories.url.allowed_urls: "http://username:password@snapshots.example-server.com"
Step 6: Register repo
PUT _snapshot/remote-repo
{
"type": "url",
"settings": {
"url": "http://username:password#snapshots.example-server.com"
}
}
Step 7: Check if the snapshot is accessible:
GET _snapshot/remote-repo/_all
At this step the error pasted at the top appears. If I disable basic authentication, it works fine.
What could be the issue here?

You should do the following:
Whitelist the repository in elasticsearch.yml (or via the Docker environment):
repositories.url.allowed_urls: "http://snapshots.example-server.com"
Create the snapshot repository:
PUT _snapshot/remote-repo
{
"type": "url",
"settings": {
"url": "http://snapshots.example-server.com",
"client": "your_client",
}
}
Add the username and password to the Elasticsearch keystore:
bin/elasticsearch-keystore add url.client.your_client.username
bin/elasticsearch-keystore add url.client.your_client.password
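With the credentials in the keystore, the repository URL no longer needs to embed them; listing and restoring should then work with the standard snapshot APIs, for example (snapshot_1 being the snapshot created in Step 4):
GET _snapshot/remote-repo/_all

POST _snapshot/remote-repo/snapshot_1/_restore
{
  "indices": "the_index_name",
  "include_global_state": false
}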

Related

Not able to configure Elasticsearch snapshot repository using OCI Amazon S3 Compatibility API

My Elasticsearch 7.8.0 is running in OCI OKE (Kubernetes running in Oracle Cloud). I want to set up an Elasticsearch backup snapshot with the OCI Object Store using the OCI Amazon S3 Compatibility API. I added the repository-s3 plugin and configured ACCESS_KEY and SECRET_KEY in the pods. While registering the repository, I am getting "s_s_l_peer_unverified_exception":
PUT /_snapshot/s3-repository
{
"type": "s3",
"settings": {
"client": "default",
"region": "OCI_REGION",
"endpoint": "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com",
"bucket": "es-backup"
}
}
Response:
{
"error" : {
"root_cause" : [
{
"type" : "repository_verification_exception",
"reason" : "[s3-repository] path is not accessible on master node"
}
],
"type" : "repository_verification_exception",
"reason" : "[s3-repository] path is not accessible on master node",
"caused_by" : {
"type" : "i_o_exception",
"reason" : "Unable to upload object [tests-0J3NChNRT9WIQJknHAssKg/master.dat] using a single upload",
"caused_by" : {
"type" : "sdk_client_exception",
"reason" : "Unable to execute HTTP request: Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]",
"caused_by" : {
"type" : "s_s_l_peer_unverified_exception",
"reason" : "Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]"
}
}
}
},
"status" : 500
}
I hope you are aware of when to use the S3 Compatibility API.
"endpoint":"OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com"
Please modify OCI_TENANCY to TENANCY_NAMESPACE. Please refer to this link for more information.
You can find your tenancy namespace information in Administration -> Tenancy Details page.
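For example, if your tenancy namespace were mytenancynamespace (a placeholder, replace with your own), the registration from your question would become:
PUT /_snapshot/s3-repository
{
  "type": "s3",
  "settings": {
    "client": "default",
    "region": "OCI_REGION",
    "endpoint": "mytenancynamespace.compat.objectstorage.OCI_REGION.oraclecloud.com",
    "bucket": "es-backup"
  }
}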
Well you shouldn't be talking to es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com where your bucket name is part of the domain. You can try it in your browser and you'll get a similar security warning about certs.
If you look at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm#usingAPI you'll see a mention of:
The application must use path-based access. Virtual host-style access (accessing a bucket as bucketname.namespace.compat.objectstorage.region.oraclecloud.com) is not supported.
AWS is migrating from path-based to sub-domain-based URLs for S3 (https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/), so the ES S3 plugin is probably defaulting to doing things the new AWS way.
Does it make a difference if you use an https:// URL for the endpoint value? Looking at my 6.8 config I have something like:
{
"s3-repository": {
"type": "s3",
"settings": {
"bucket": "es-backup",
"client": "default",
"endpoint": "https://{namespace}.compat.objectstorage.us-ashburn-1.oraclecloud.com/",
"region": "us-ashburn-1"
}
}
}
My guess is that providing a full URL for the endpoint sets the protocol and path_style_access, or that 6.8 didn't require path_style_access to be set to true while 7.8 might. Either way, try a full URL or set path_style_access to true. Relevant docs at https://www.elastic.co/guide/en/elasticsearch/plugins/master/repository-s3-client.html
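For the path_style_access option, the setting lives on the S3 client configuration; a hedged sketch for the "default" client (per the repository-s3 client docs linked above) would be a line like this in elasticsearch.yml:
s3.client.default.path_style_access: true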

How to migrate index from Old Server to new server of elasticsearch

I have one index in an old Elasticsearch server on version 6.2.0 (Windows server) and now I am trying to move it to a new server (Linux) on Elasticsearch 7.6.2. I tried the command below to migrate my index from the old server to the new one, but it throws an exception.
POST _reindex
{
"source": {
"remote": {
"host": "http://MyOldDNSName:9200"
},
"index": "test"
},
"dest": {
"index": "test"
}
}
Exception I am getting is -
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "[MyOldDNSName:9200] not whitelisted in reindex.remote.whitelist"
}
],
"type" : "illegal_argument_exception",
"reason" : "[MyOldDNSName:9200] not whitelisted in reindex.remote.whitelist"
},
"status" : 400
}
Note: I did not create any index in the new Elasticsearch server. Do I have to create it with my old schema and then execute the above command?
The error message is quite clear: your remote host (Windows, in your case), from which you are trying to build an index on your new host (Linux), is not whitelisted. Please refer to the Elasticsearch guide on how to reindex from remote for more info.
As per the same doc:
Remote hosts have to be explicitly whitelisted in elasticsearch.yml
using the reindex.remote.whitelist property. It can be set to a
comma delimited list of allowed remote host and port combinations
(e.g. otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*).
Another useful link to troubleshoot the issue:
https://www.elastic.co/guide/en/elasticsearch/reference/8.0/docs-reindex.html#reindex-from-remote
Add this to elasticsearch.yml, modifying it according to your environment:
reindex.remote.whitelist: "otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*"
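For this question the entry would be the old server's host and port; note that reindex.remote.whitelist is a static setting, so the node has to be restarted for it to take effect:
reindex.remote.whitelist: "MyOldDNSName:9200"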

How to Get Visual Studio to Publish an Application to Service Fabric Cluster Secured by Certificate Common Name Instead of Thumbprint?

I followed the steps documented here to convert my existing ARM template to use the common name setting instead of the thumbprint. The deployment was successful and I was able to connect to the Service Fabric Explorer using my browser after the typical certificate selection popup. Next, I tried to deploy an application to the cluster just like I had previously. Even though I can see the cluster connection endpoint URI in the VS publish Service Fabric application dialog, VS fails to connect to the cluster. Before, I would get a prompt to permit VS to access the local certificate. Does anyone know how to get VS to deploy an application to a Service Fabric cluster set up using the certificate common name?
Extracts from the MS link above:
"virtualMachineProfile": {
"extensionProfile": {
"extensions": [`enter code here`
{
"name": "[concat('ServiceFabricNodeVmExt','_vmNodeType0Name')]",
"properties": {
"type": "ServiceFabricNode",
"autoUpgradeMinorVersion": true,
"protectedSettings": {
"StorageAccountKey1": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('supportLogStorageAccountName')),'2015-05-01-preview').key1]",
"StorageAccountKey2": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('supportLogStorageAccountName')),'2015-05-01-preview').key2]"
},
"publisher": "Microsoft.Azure.ServiceFabric",
"settings": {
"clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]",
"nodeTypeRef": "[variables('vmNodeType0Name')]",
"dataPath": "D:\\SvcFab",
"durabilityLevel": "Bronze",
"enableParallelJobs": true,
"nicPrefixOverride": "[variables('subnet0Prefix')]",
"certificate": {
"commonNames": [
"[parameters('certificateCommonName')]"
],
"x509StoreName": "[parameters('certificateStoreValue')]"
}
},
"typeHandlerVersion": "1.0"
}
},
and
{
"apiVersion": "2018-02-01",
"type": "Microsoft.ServiceFabric/clusters",
"name": "[parameters('clusterName')]",
"location": "[parameters('clusterLocation')]",
"dependsOn": [
"[concat('Microsoft.Storage/storageAccounts/', variables('supportLogStorageAccountName'))]"
],
"properties": {
"addonFeatures": [
"DnsService",
"RepairManager"
],
"certificateCommonNames": {
"commonNames": [
{
"certificateCommonName": "[parameters('certificateCommonName')]",
"certificateIssuerThumbprint": ""
}
],
"x509StoreName": "[parameters('certificateStoreValue')]"
},
...
I found the solution for Visual Studio. I needed to update the PublishProfiles/Cloud.xml file. I replaced ServerCertThumbprint with ServerCommonName, and then used the certificate CN for the new property and the existing FindValue property. Additionally, I changed the FindType property to FindBySubjectName. I am now able to successfully connect and publish my application to the cluster.
<ClusterConnectionParameters
ConnectionEndpoint="sf-commonnametest-scus.southcentralus.cloudapp.azure.com:19000"
X509Credential="true"
ServerCommonName="sfrpe2eetest.southcentralus.cloudapp.azure.com"
FindType="FindBySubjectName"
FindValue="sfrpe2eetest.southcentralus.cloudapp.azure.com"
StoreLocation="CurrentUser"
StoreName="My" />

How to test analyzer-smartcn plugin for Elastic Search on local machine

I installed the smartcn plugin on my Elasticsearch, restarted Elasticsearch, and tried to create an index with these settings:
PUT /test_chinese
{
"settings": {
"index": {
"analysis": {
"analyzer": {
"default": {
"type": "smartcn"
}
}
}
}
}
}
However, when I run this in Marvel, I get this error back and I see a bunch of errors in Elasticsearch:
"error": "IndexCreationException[[test_chinese] failed to create
index]; nested: ElasticsearchIllegalArgumentException[failed to find
analyzer type [smartcn] or tokenizer for [default]]; nested:
NoClassSettingsException[Failed to load class setting [type] with
value [smartcn]]; nested:
ClassNotFoundException[org.elasticsearch.index.analysis.smartcn.SmartcnAnalyzerProvider];
", "status": 400
Any ideas what I might be missing?
I figured it out. I had manually installed the plugin from the zip, and that was causing issues. I reinstalled it the right way, with the version specific to ES 1.7, and it worked.
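For reference, the 1.x-style plugin install would look roughly like this (--install was the 1.x plugin manager syntax; 2.7.0 is my assumption for the plugin version matching ES 1.7, so check the plugin's compatibility table before running it):
bin/plugin --install elasticsearch/elasticsearch-analysis-smartcn/2.7.0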

Cannot load search template registered with REST API

I have a problem loading a search template which has been registered through the REST API. If the search template is placed in the /config/scripts/ folder there is no problem.
The template has been registered via POST to /_search/template/templateName, and I can see that the template has been successfully registered when I do a GET to /_search/template/templateName.
However, when I try to send a request that utilizes this search template, I get an error. I have tried the following endpoints:
POST: /_search/template (the one from the documentation)
POST: [index]/_search/template
POST: [index]/[type]/_search/template
With this body:
{
"template": {
"file": "templateName"
},
"params": {
"userId" : "AU43nSoTZOSzwq_2ZUA4",
etc...
}
}
But it keeps returning this error:
{
"error": "SearchPhaseExecutionException[Failed to execute phase [query_fetch], all shards failed; shardFailures {[UDBaJWKqQ5GpZedzLCtrFg][.scripts][0]: ElasticsearchIllegalArgumentException[Unable to find on disk script templateName]}]",
"status": 400
}
I cannot upload files to the Elasticsearch host that I'm using, so I need to register templates via a POST request. What am I missing?
Thanks in advance
The correct search request is like this:
GET /some_index/_search/template
{
"template": {
"id":"templateName"
},
"params": {
"userId" : "AU43nSoTZOSzwq_2ZUA4",
...
}
}
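In other words, "file" only resolves templates placed on disk under config/scripts, whereas a template registered through the REST API is indexed (into .scripts, as the error message shows) and must be referenced with "id". For completeness, a sketch of registering such a template via the API (the query body here is invented, just to show the {{userId}} parameter):
POST /_search/template/templateName
{
  "template": {
    "query": {
      "match": {
        "userId": "{{userId}}"
      }
    }
  }
}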
