How to use the High Level REST Client in Spring Data ES 3.2.0.M1 - elasticsearch

Spring Data ES 3.2.0.M1 still uses the old TransportClient instead of the High Level REST Client
Spring Data ES 3.2.0.M1 supports the High Level REST Client; see "Add support for Java High Level REST Client". I've added Spring Data ES 3.2.0.M1 to the Spring Boot 2 (SB2) app:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-elasticsearch</artifactId>
    <version>3.2.0.M1</version>
</dependency>
However, the TransportClient is still used. There are two indications of that: exceptions on start-up:
o.e.transport.netty4.Netty4Transport : exception caught on transport layer [NettyTcpChannel{localAddress=/127.0.0.1:61171, remoteAddress=localhost/127.0.0.1:8085}], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (48,54,54,50)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-codec-4.1.33.Final.jar:4.1.33.Final]
and an exception stack trace when calling ElasticsearchTemplate:
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:349)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:247)
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:382)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:395)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:384)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46)
at org.springframework.data.elasticsearch.core.ElasticsearchTemplate.getSearchResponse(ElasticsearchTemplate.java:947)
Is there any config parameter to tell Spring Data ES to switch to the new High Level REST Client? The docs say nothing about it.
P.S. Spring Data ES 3.2.0.M1 pulls in ES client version 6.4.3:
Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (48,54,54,50)
at org.elasticsearch.transport.TcpTransport.validateMessageHeader(TcpTransport.java:1327) ~[elasticsearch-6.4.3.jar:6.4.3]
at org.elasticsearch.transport.netty4.Netty4SizeHeaderFrameDecoder.decode(Netty4SizeHeaderFrameDecoder.java:36) ~[transport-netty4-client-6.4.3.jar:6.4.3]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502) ~[netty-codec-4.1.33.Final.jar:4.1.33.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441) ~[netty-codec-4.1.33.Final.jar:4.1.33.Final]
... 20 common frames omitted
while the back end runs version 6.4.2:
bash-4.4$ curl http://127.0.0.1:8085
{
  "name" : "NA17WWR",
  "cluster_name" : "494164851665",
  "cluster_uuid" : "7t3LoK7PRp-ur6FyxSmHwQ",
  "version" : {
    "number" : "6.4.2",
    "build_flavor" : "oss",
    "build_type" : "zip",
    "build_hash" : "04711c2",
    "build_date" : "2018-10-16T09:16:35.059415Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

As mentioned in the issue you're referring to, the high level REST client is available in ElasticsearchRestTemplate (see PR #216), not in ElasticsearchTemplate, which will be kept until ES 7 for backward-compatibility reasons.
You can create one with the configuration below:
<bean name="elasticsearchTemplate"
      class="org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate">
    <constructor-arg name="client" ref="restClient"/>
</bean>

<elasticsearch:rest-client id="restClient"/>
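
If you use Java configuration instead of XML, a minimal sketch looks like the following. It assumes the RestHighLevelClient-based constructor that the XML above binds to, and points the client at the HTTP port from the question (8085); adjust host and port for your setup.

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate;

@Configuration
public class RestClientConfig {

    // The REST client talks to the HTTP port (8085 in the question's setup),
    // not the binary transport port that TransportClient connects to.
    @Bean
    public RestHighLevelClient restClient() {
        return new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 8085, "http")));
    }

    @Bean
    public ElasticsearchRestTemplate elasticsearchTemplate(RestHighLevelClient restClient) {
        return new ElasticsearchRestTemplate(restClient);
    }
}

Incidentally, the start-up exception in the question is the TransportClient speaking its binary protocol to the HTTP port: the bytes (48,54,54,50) in "invalid internal transport message format" are hex for the ASCII string "HTTP".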

Related

OpenSearch docker instance only allowing HTTPS connections

I'm trying to get OpenSearch configured on my local machine, and am deploying it through docker-compose using the following configuration:
opensearch:
  image: opensearchproject/opensearch:1.0.0
  restart: unless-stopped
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    discovery.type: single-node
The instance starts successfully. However, when trying to access it through the web interface, it only accepts HTTPS connections with the default basic-auth credentials (admin:admin), i.e.
https://localhost:9200 asks me to enter administrator credentials and, upon doing so, returns the expected response:
{
  "name" : "a39dcf825899",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "d2ZBZDQRTyG6SvYlCmX3Iw",
  "version" : {
    "distribution" : "opensearch",
    "number" : "1.0.0",
    "build_type" : "tar",
    "build_hash" : "34550c5b17124ddc59458ef774f6b43a086522e3",
    "build_date" : "2021-07-02T23:22:21.383695Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}
However, when attempting to connect to the instance over HTTP, I get an empty response. Chrome shows an empty reply, and the OpenSearch Python client on a Django instance running in a separate Docker container (part of the same docker-compose.yml) raises:
opensearchpy.exceptions.ConnectionError: ConnectionError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))) caused by: ProtocolError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')))
For reference, the code I am using to connect the OpenSearch Python client to the OpenSearch instance is:
cls._os_client = OpenSearch(
    [{"host": 'opensearch', "port": '9200'}],
    use_ssl=False,
    verify_certs=False,
    ssl_assert_hostname=False,
    ssl_show_warn=False
)
How can I configure OpenSearch to allow insecure HTTP connections?
You can disable security: just add DISABLE_SECURITY_PLUGIN=true to your environment.
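Applied to the compose file from the question, only the environment block changes:

opensearch:
  image: opensearchproject/opensearch:1.0.0
  restart: unless-stopped
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    discovery.type: single-node
    DISABLE_SECURITY_PLUGIN: "true"

After recreating the container, plain http://localhost:9200 should answer without credentials, and use_ssl=False in the Python client will work as written.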

Elasticsearch GET / is returning HTML error instead of JSON response

I have recently installed Elasticsearch on RHEL and set the node name in the configuration file. Later, I started the service with sudo systemctl start elasticsearch.service. The service seems to be running, as per the status command:
sudo systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2020-11-25 14:59:13 CET; 2h 37min ago
Docs: https://www.elastic.co
Main PID: 6565 (java)
CGroup: /system.slice/elasticsearch.service
├─6565 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=t...
└─6754 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
Nov 25 14:59:05 hdm18 systemd[1]: Starting Elasticsearch...
Nov 25 14:59:13 hdm18 systemd[1]: Started Elasticsearch.
But the output of GET is an HTML page instead of a JSON response:
curl -X GET "localhost:9200/?pretty"
<!-- IE friendly error message walkround.
if error message from server is less than
512 bytes IE v5+ will use its own error
message instead of the one returned by
server. -->
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN">
<html>
<head>
<meta
Any idea what I am doing wrong?
I believe you copied your command from this official link, but if you copy the curl command it should be curl -X GET "localhost:9200/?pretty", which prints the correct output below:
{
  "name" : "Opster",
  "cluster_name" : "es_710",
  "cluster_uuid" : "SZ-nvW_KSOaudmfB6e0oFg",
  "version" : {
    "number" : "7.10.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
    "build_date" : "2020-11-09T21:30:33.964949Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
This issue is solved by unsetting the HTTP and HTTPS proxy:
unset http_proxy
unset https_proxy
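If you still need the proxy for other traffic, a narrower alternative is to exempt localhost through the standard no_proxy variable, which curl honors:

# Keep the proxy for external traffic but bypass it for localhost
export no_proxy="localhost,127.0.0.1"
curl -X GET "localhost:9200/?pretty"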
You're using Kibana Dev Console syntax when it looks like you're trying to use curl; check the curl syntax when looking at tutorials.
Probably the easiest option for you is to use the Dev Console in Kibana.

Elastic Search No query registered for [geo_bounding_box]

I am trying to fetch records from Elasticsearch and I get this error:
ElasticsearchStatusException[Elasticsearch exception [type=exception, reason=SearchPhaseExecutionException[Failed to execute phase [query_fetch], all shards failed;
shardFailures {[-kDbP0fmTUa5B8v1gpgoZQ][dataintelindex_ra][0]: SearchParseException[[dataintelindex_ra]
[0]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query":{"geo_bounding_box":{"loc":
{"top_left":[-74.1,40.73],"bottom_right":
[-73.99,40.717]},"validation_method":"STRICT","type":"MEMORY","ignore_unmapped":false,"boost":1.0}}}]]];
nested: QueryParsingException[[dataintelindex_ra] No query registered for [geo_bounding_box]]; }]]]
My Java Code is as below
SearchRequest searchRequest = new SearchRequest("dataintelindex_ra").types("station_info");
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.geoBoundingBoxQuery("loc").setCorners(40.73, -74.1, 40.717, -73.99));
searchRequest.source(searchSourceBuilder);
SearchResponse response = elasticSearchClient.search(searchRequest, RequestOptions.DEFAULT);
for (SearchHit searchHit : response.getHits().getHits()) {
    System.out.println("~~~~~~~~SearchHit[] searchHits~~~~~~~~~~~~~~ " + searchHit.getSourceAsString());
}
Please let me know if I missed something while trying to index; I am new to Elasticsearch.
Also, in case I want to include one more criterion in my query, like below:
searchSourceBuilder.query(QueryBuilders.termsQuery("zoneType", "test", "oms"));
Below is the result for the above query, and it works fine:
~~~~~~~~SearchHit[] searchHits~~~~~~~~~~~~~~ {"tag_datatype":"sensor","loc":[{"lat":"0","lon":"0"}],"level":1,"kml_path":"","created":"Mon Aug 10 16:02:51 IST 2020","latitude":"0","station_id":"5f312253b4c93c1d20bbbb39","longtitude":"0","tag_owner":"","description":"","zoneType":"oms","tag_network_name":"chak_network","display_name":"506020200236117-O1","supply_zone":"506020200236117-O1","outflow":null,"tag_sector":"dmameter","name":"506020200236117-O1","tag_category":"sensorstation","inflow":null,"_id":"5f312253b4c93c1d20bbbb39","tag_location":"NA","lastmod":"Mon Aug 10 16:02:51 IST 2020","status":"ACTIVE"}
~~~~~~~~SearchHit[] searchHits~~~~~~~~~~~~~~ {"tag_datatype":"sensor","loc":[{"lat":"0","lon":"0"}],"level":1,"kml_path":"","created":"Tue Aug 11 11:36:51 IST 2020","latitude":"0","station_id":"5f32357b3ccb8f51e003587e","longtitude":"0","tag_owner":"","description":"","zoneType":"village","display_name":"testvillage1","supply_zone":"testvillage1","outflow":null,"tag_sector":"dmameter","name":"testvillage1","tag_category":"sensorstation","inflow":null,"_id":"5f32357b3ccb8f51e003587e","tag_location":"NA","lastmod":"Tue Aug 11 11:36:51 IST 2020","status":"ACTIVE"}
How do I combine it with the above geo bounding box query? Do I need to add it as a filter? (A sketch follows the answer below.)
Update : Dependencies
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>transport</artifactId>
    <version>6.4.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.4.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>6.4.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>6.4.0</version>
</dependency>
The server's response to GET / shows its version:
{
  "status" : 200,
  "name" : "test1",
  "version" : {
    "number" : "1.2.2",
    "build_hash" : "243243432feaga",
    "build_timestamp" : "2014-07-09T12:02:32Z",
    "build_snapshot" : false,
    "lucene_version" : "4.8"
  },
  "tagline" : "You Know, for Search"
}
Thanks in advance
Rakesh
The problem is that you're running ES server v1.2.2 (an extremely old version) with a 6.4.0 client.
The 6.4.0 client has the geoBoundingBoxQuery() method, whereas the 1.2.2-era client provides geoBoundingBoxFilter(); the two are incompatible, since there was a big query/filter refactoring in ES 2.x.
As a rule of thumb, you should always run the same version of ES and the client library. In your case, you have a delta of several major versions between your server and your client.
You should definitely consider upgrading your ES cluster to at least 6.4.0, or downgrading your client to 1.x.
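Regarding the follow-up about combining the two criteria: once client and server versions match (6.x on both sides), one way, sketched under that assumption with the field names and values from the question, is a bool query with both clauses as filters:

import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

// Both criteria are exact constraints, so they go in filter context,
// which skips relevance scoring and can be cached by ES.
BoolQueryBuilder query = QueryBuilders.boolQuery()
        .filter(QueryBuilders.termsQuery("zoneType", "test", "oms"))
        .filter(QueryBuilders.geoBoundingBoxQuery("loc")
                .setCorners(40.73, -74.1, 40.717, -73.99));

SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(query);
searchRequest.source(searchSourceBuilder);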

Apache Superset oauth2 with custom Spring-Security OAuth2 server

I am using Apache Superset and trying to configure its OAuth2 capability to connect to my (custom) Spring-Security OAuth2 server. Unfortunately, it isn't working right now. The stack trace begins with this:
15:09:16.584 [qtp1885996206-21] ERROR org.springframework.boot.web.support.ErrorPageFilter - Forwarding to error page from request [/oauth/authorize] due to exception [Could not resolve view with name 'forward:/oauth/confirm_access' in servlet with name 'dispatcherServlet'] javax.servlet.ServletException: Could not resolve view with name 'forward:/oauth/confirm_access' in servlet with name 'dispatcherServlet' at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1262) ~[spring-webmvc-4.3.8.RELEASE.jar:4.3.8.RELEASE] at
Here is the relevant portion of my config.py from Superset.
AUTH_TYPE = AUTH_OAUTH
OAUTH_PROVIDERS = [
    {
        "name": "MY-OAUTH",
        "icon": APP_ICON,
        "token_key": "password",
        "remote_app": {
            "consumer_key": "my_dashboard",
            "consumer_secret": "my_secret",
            "base_url": "http://localhost:8088/myoauth",
            "request_token_params": {
                "scope": "my_dashboard read write",
                "grant_type": "password"
            },
            "request_token_url": None,
            "access_token_url": "http://localhost:8088/myoauth/oauth/token",
            "access_token_params": {
                "scope": "my_dashboard read write",
                "grant_type": "password",
                "response_type": "authorization_code"
            },
            "access_token_method": "POST",
            "authorize_url": "http://localhost:8088/myoauth/oauth/authorize"
        }
    }
]
A nice gentleman suggested that I have somehow disabled the servlet handler for /oauth/confirm_access, but I am not sure how to check on that or how to fix such a problem.
Do you know what is going on here, what I can do to fix this, or where I can start looking?
Thanks,
Matt

Spring Mongo Log4j customize

How can I customize the Spring log4j output that goes into the Mongo datastore?
I was able to follow Spring's example on how to use MongoLog4j. The logs are being persisted into MongoDB, but whatever is in my conversion pattern is not respected. My desire is to store the line number in the log message.
Here's my log4j properties file:
log4j.rootCategory=INFO, stdout
log4j.appender.stdout=org.springframework.data.mongodb.log4j.MongoLog4jAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] [%L] - <%m>%n
log4j.appender.stdout.host = localhost
log4j.appender.stdout.port = 27017
log4j.appender.stdout.database = prod
log4j.appender.stdout.collectionPattern = logs
log4j.appender.stdout.applicationId = horizon
log4j.appender.stdout.warnOrHigherWriteConcern = FSYNC_SAFE
log4j.category.org.springframework.batch=DEBUG
log4j.category.org.springframework.data.document.mongodb=DEBUG
log4j.category.org.springframework.transaction=INFO
Below is what is being stored in Mongo.
{ "_id" : ObjectId("4f720482788d6140dacb0270"), "applicationId" : "test", "na
me" : "com.service.MongoTest", "level" : "DEBUG", "timestamp
" : ISODate("2012-03-27T18:18:42.981Z"), "properties" : { "applicationId" : "test" }, "message" : "Debug TEST3" }
Looking at Spring's source code, conversion patterns don't seem to be implemented there. Instead, I found another project that has line numbers and custom conversion patterns implemented: http://log4mongo.org/
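For reference, a sketch of what that could look like with log4mongo-java. The class and property names below are taken from the log4mongo documentation as I recall it, so treat them as assumptions and verify against the version you use; notably, its pattern-layout appender expects the conversion pattern to render a JSON document:

log4j.rootCategory=INFO, mongo
# Appender class name per the log4mongo-java docs (verify for your version)
log4j.appender.mongo=org.log4mongo.MongoDbPatternLayoutAppender
log4j.appender.mongo.layout=org.apache.log4j.PatternLayout
# The pattern must produce valid JSON; %L captures the line number
log4j.appender.mongo.layout.ConversionPattern={"timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss.SSS}","level":"%p","logger":"%c","line":"%L","message":"%m"}
log4j.appender.mongo.hostname=localhost
log4j.appender.mongo.port=27017
log4j.appender.mongo.databaseName=prod
log4j.appender.mongo.collectionName=logs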
