How to use document API with HTTPS Elasticsearch - elasticsearch

I'm having a rough time figuring out how can I use this Endpoint but with HTTPS:
PUT twitter/_doc/1
{
  "user" : "kimchy",
  "post_date" : "2009-11-15T14:12:12",
  "message" : "trying out Elasticsearch"
}
Changing the scheme to https doesn't work; I don't get any response at all. I'm using X-Pack, and I didn't configure any special user.
Any hint?

Related

Is there any way to check if an Elasticsearch cluster exists or not?

I am working on an Elasticsearch (ES) cluster monitoring dashboard where I want to onboard all my ES clusters. I am developing the dashboard from scratch, and I want to add a button: by clicking it, the user can enter the address/IP of an ES cluster (first-time onboarding) and hit submit. If that ES cluster exists, the user should be able to monitor it; if not, the dashboard should show an error message such as "Sorry, you have entered a wrong cluster address/IP". So, how can I determine whether an ES cluster exists or not?
A simple curl call to the ES cluster's address and port should be enough to verify whether an ES cluster exists.
For example, to verify whether an ES cluster exists at http://localhost:9200, we would fire a curl call as follows:
curl -XGET "http://localhost:9200/"
If the ES cluster exists and you have permission to access it, it returns JSON like the following:
{
  "name" : "es01",
  "cluster_name" : "elasticsearch7",
  "cluster_uuid" : "xu49eNE6SuC1Z857kG2Q5g",
  "version" : {
    "number" : "7.16.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "4e6e4eab2297e949ec994e688dad46290d018022",
    "build_date" : "2022-01-06T23:43:02.825887787Z",
    "build_snapshot" : false,
    "lucene_version" : "8.10.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
Otherwise, it returns an error such as:
curl: (7) Failed to connect to localhost port 9200: Connection refused
Please note that you would need to adapt the call to your client language or tool; the example above is a plain curl command run from a shell.
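For instance, a minimal sketch of the same check in Python using only the standard library (the address, timeout, and function name are illustrative, not from any particular client library):

```python
import json
import urllib.error
import urllib.request

def cluster_info(address, timeout=5):
    """Return the ES root-info dict if a cluster answers at `address`,
    otherwise None (host unreachable, non-JSON reply, or not ES)."""
    try:
        with urllib.request.urlopen(address, timeout=timeout) as resp:
            info = json.load(resp)
    except (urllib.error.URLError, ValueError, OSError):
        return None
    # A real Elasticsearch node reports these fields at "/".
    if "cluster_name" in info and "version" in info:
        return info
    return None

info = cluster_info("http://localhost:9200/")
if info is None:
    print("Sorry, you have entered a wrong cluster address/IP")
else:
    print("Found cluster:", info["cluster_name"])
```

The same three failure modes the curl example shows (connection refused, non-ES endpoint, permission problems) all collapse to `None` here, which maps directly onto the dashboard's error message.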

Not able to configure Elasticsearch snapshot repository using OCI Amazon S3 Compatibility API

My Elasticsearch 7.8.0 cluster is running in OCI OKE (Kubernetes running in Oracle Cloud). I want to set up Elasticsearch snapshot backups to the OCI Object Store using the OCI Amazon S3 Compatibility API. I added the repository-s3 plugin and configured ACCESS_KEY and SECRET_KEY in the pods. While registering the repository, I get an "s_s_l_peer_unverified_exception":
PUT /_snapshot/s3-repository
{
  "type": "s3",
  "settings": {
    "client": "default",
    "region": "OCI_REGION",
    "endpoint": "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com",
    "bucket": "es-backup"
  }
}
Response:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[s3-repository] path is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[s3-repository] path is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [tests-0J3NChNRT9WIQJknHAssKg/master.dat] using a single upload",
      "caused_by" : {
        "type" : "sdk_client_exception",
        "reason" : "Unable to execute HTTP request: Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]",
        "caused_by" : {
          "type" : "s_s_l_peer_unverified_exception",
          "reason" : "Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]"
        }
      }
    }
  },
  "status" : 500
}
First, make sure you know when to use the S3 Compatibility API.
"endpoint" : "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com"
Change OCI_TENANCY to your tenancy namespace (TENANCY_NAMESPACE). Please refer to this link for more information.
You can find your tenancy namespace on the Administration -> Tenancy Details page.
Well, you shouldn't be talking to es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com, where your bucket name is part of the domain. You can try that URL in your browser and you'll get a similar security warning about the certificate.
If you look at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm#usingAPI you'll see a mention of:
The application must use path-based access. Virtual host-style access (accessing a bucket as bucketname.namespace.compat.objectstorage.region.oraclecloud.com) is not supported.
AWS is migrating S3 from path-based to subdomain-based URLs (https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/), so the ES S3 plugin is probably defaulting to the new AWS way.
Does it make a difference if you use an https:// URL for the endpoint value? Looking at my 6.8 config, I have something like:
{
  "s3-repository": {
    "type": "s3",
    "settings": {
      "bucket": "es-backup",
      "client": "default",
      "endpoint": "https://{namespace}.compat.objectstorage.us-ashburn-1.oraclecloud.com/",
      "region": "us-ashburn-1"
    }
  }
}
What I'm guessing is that giving a full URL for the endpoint sets both the protocol and path_style_access, or that 6.8 didn't require you to set path_style_access to true but 7.8 might. Either way, try a full URL or set path_style_access to true. Relevant docs: https://www.elastic.co/guide/en/elasticsearch/plugins/master/repository-s3-client.html
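Putting the two suggestions together, a sketch of what the repository registration might look like (TENANCY_NAMESPACE and the region are placeholders for your own values, and this is an illustration, not a verified working config):

```json
PUT /_snapshot/s3-repository
{
  "type": "s3",
  "settings": {
    "bucket": "es-backup",
    "client": "default",
    "endpoint": "https://TENANCY_NAMESPACE.compat.objectstorage.us-ashburn-1.oraclecloud.com",
    "region": "us-ashburn-1"
  }
}
```

If the full URL alone doesn't help, path-style access can also be forced per client in elasticsearch.yml with s3.client.default.path_style_access: true; check the repository-s3 docs linked above for the setting's exact placement in your version.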

Can't connect to my proxied Elasticsearch node

I'm having issues connecting from my Go client to my ES node.
I have elasticsearch behind an nginx proxy that sets basic auth.
All settings are default in ES besides memory.
Via browser it works wonderfully, but not via this client:
https://github.com/olivere/elastic
I read the docs, and they say the client uses the /_nodes/http API to connect. This is probably where I did something wrong, because the response from that API looks like this:
{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "u6TqFjAvRBa3_4FndfKh4w" : {
      "name" : "u6TqFjA",
      "transport_address" : "127.0.0.1:9300",
      "host" : "127.0.0.1",
      "ip" : "127.0.0.1",
      "version" : "5.6.2",
      "build_hash" : "57e20f3",
      "roles" : [
        "master",
        "data",
        "ingest"
      ],
      "http" : {
        "bound_address" : [
          "[::1]:9200",
          "127.0.0.1:9200"
        ],
        "publish_address" : "127.0.0.1:9200",
        "max_content_length_in_bytes" : 104857600
      }
    }
  }
}
I'm guessing I have to set the IPs to my actual IP/domain (my domain is like es01.somedomain.com).
So how do I correctly configure Elasticsearch so that my Go client can connect?
My config files for nginx look similar to this: https://www.elastic.co/blog/playing-http-tricks-nginx
Edit: I found a temporary solution by setting elastic.SetSniff(false) in the client options, but I think that means I can't scale ES horizontally. So I'm still looking for an alternative.
You are looking for the HTTP options, specifically http.publish_host and http.publish_port, which should be set to the publicly reachable address and port of the Nginx server proxying the ES node.
Note that with Elasticsearch listening on 127.0.0.1:9300 for the transport, you won't be able to form a cluster with nodes on other hosts. The transport can be configured similarly with the transport options.
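For example, a sketch of the relevant elasticsearch.yml lines (es01.somedomain.com and the port are placeholders for your proxy's publicly reachable address):

```yaml
# Address this node advertises in /_nodes/http, i.e. what sniffing
# clients such as olivere/elastic will try to connect to.
http.publish_host: es01.somedomain.com
http.publish_port: 443
```

With the publish address pointing at the Nginx proxy instead of 127.0.0.1:9200, the client's sniffing step resolves to a reachable endpoint and SetSniff(false) is no longer needed.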

Elasticsearch GET request with request body

Isn't it against REST-style approach to pass a request body together with GET request?
For instance to filter some information in Elasticsearch
curl localhost:9200/megacorp/employee/_search -d '{"query" : {"filtered" : {"filter" : {"range" : {"age" : { "gt" : 30 }}},"query" : {"match" : {"last_name" : "smith"}}}}}'
some tools are even designed to avoid request body in GET request (like postman)
From the RFC:
A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.
In other words, it's not forbidden, but the behavior is undefined and should be avoided. HTTP clients, servers, and proxies are free to drop the body without going against the standard. It's simply a bad practice.
Further text from the HTTPBis working group (the group working on HTTP and related standards):
Finally, note that while HTTP allows GET requests to have a body syntactically, this is done only to allow parsers to be generic; as per RFC7231, Section 4.3.1, a body on a GET has no meaning, and will be either ignored or rejected by generic HTTP software.
source
No. It's not.
In REST, using POST to query does not make sense. POST is supposed to modify the server. When searching you obviously don't modify the server.
GET applies here very well.
For example, what would be the difference of running a search with:
GET /_search?q=foo
vs
GET /_search
{
  "query": {
    "query_string": {
      "query" : "foo"
    }
  }
}
In both cases, you'd like to "GET" back some results. You don't mean to change any state on the server side.
That's why I think GET is totally applicable here, whether you are passing the query within the URI or using a body.
That being said, we are aware that some languages and tools don't allow a body with GET, even though the RFC does not forbid it.
So Elasticsearch also supports POST.
This:
curl -XPOST localhost:9200/megacorp/employee/_search -d '{"query" : {"filtered" : {"filter" : {"range" : {"age" : { "gt" : 30 }}},"query" : {"match" : {"last_name" : "smith"}}}}}'
Will work the same way.
You can use the source query parameter in an Elasticsearch GET request:
just add source=query_string_body&source_content_type=application/json
The url will look like the following:
http://localhost:9200/index/_search/?source_content_type=application/json&source={"query":{"match_all":{}}}
ref:
https://discuss.elastic.co/t/query-elasticsearch-from-browser-webservice/129697
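Note that the query JSON has to be percent-encoded when it is placed in the URL (the braces and quotes in the example above are shown unencoded for readability). A minimal sketch in Python of building such a URL, where the base address, index name, and helper name are illustrative:

```python
from urllib.parse import urlencode

def source_search_url(base, query_body):
    """Build a GET _search URL carrying the query in the `source`
    parameter, percent-encoding the JSON body."""
    params = {
        "source": query_body,
        "source_content_type": "application/json",
    }
    return base + "/_search/?" + urlencode(params)

url = source_search_url("http://localhost:9200/index",
                        '{"query":{"match_all":{}}}')
print(url)
```

Any HTTP client can then issue a plain GET to the resulting URL, with no request body involved.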

Elasticsearch ActiveMQ River Configuration

I started configuring an ActiveMQ river. I have already installed the ActiveMQ plugin, but I'm confused about how to make it work; the documentation is very brief. I followed the steps for creating a new river exactly, but I don't know what to do next.
Note:
I have an ActiveMQ server up and running, and I tested it using a simple JMS app to push a message into a queue.
I created a new river using:
curl -XPUT 'localhost:9200/_river/myindex_river/_meta' -d '{
  "type" : "activemq",
  "activemq" : {
    "user" : "guest",
    "pass" : "guest",
    "brokerUrl" : "failover://tcp://localhost:61616",
    "sourceType" : "queue",
    "sourceName" : "elasticsearch",
    "consumerName" : "activemq_elasticsearch_river_myindex_river",
    "durable" : false,
    "filter" : ""
  },
  "index" : {
    "bulk_size" : 100,
    "bulk_timeout" : "10ms"
  }
}'
After creating the river, I can query its status with curl -XGET 'localhost:9200/my_index/_status', but that gives me the index status, not the created river.
Any help getting me on the right road with the ActiveMQ river configuration for Elasticsearch would be appreciated.
I told you on the mailing list: define the index.index value, or set the name of your river to be your index name (easier):
curl -XPUT 'localhost:9200/_river/my_index/_meta' -d '
{
  "type" : "activemq",
  "activemq" : {
    "user" : "guest",
    "pass" : "guest",
    "brokerUrl" : "failover://tcp://localhost:61616",
    "sourceType" : "queue",
    "sourceName" : "elasticsearch",
    "consumerName" : "activemq_elasticsearch_river_myindex_river",
    "durable" : false,
    "filter" : ""
  },
  "index" : {
    "bulk_size" : 100,
    "bulk_timeout" : "10ms"
  }
}'
or
curl -XPUT 'localhost:9200/_river/myindex_river/_meta' -d '
{
  "type" : "activemq",
  "activemq" : {
    "user" : "guest",
    "pass" : "guest",
    "brokerUrl" : "failover://tcp://localhost:61616",
    "sourceType" : "queue",
    "sourceName" : "elasticsearch",
    "consumerName" : "activemq_elasticsearch_river_myindex_river",
    "durable" : false,
    "filter" : ""
  },
  "index" : {
    "index" : "my_index",
    "bulk_size" : 100,
    "bulk_timeout" : "10ms"
  }
}'
It should help.
If not, update your question with what you can see in logs.