import java.net.InetAddress;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

Settings settings = Settings.builder()
        .put("client.transport.ignore_cluster_name", false)
        .put("client.transport.sniff", true)
        .put("cluster.name", "TESTCULSTER").build(); // must match the running cluster's name
TransportClient client = new PreBuiltTransportClient(settings)
        .addTransportAddress(new TransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
// ClusterAdminClient clusterAdminClient = client.admin().cluster();
ClusterHealthResponse healths = client.admin().cluster().prepareHealth().get();
String clusterName = healths.getClusterName();
System.out.println(clusterName);
I am getting this error:
Exception in thread "main" NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{zJ52yLDcR82UUQ7j-oxm6w}{127.0.0.1}{127.0.0.1:9300}]]
You want to connect to an Elasticsearch cluster via Java, right?
I suggest using the HTTP layer (port 9200) rather than the transport layer on port 9300.
You enabled sniffing, which means the Java client will try to connect to every node directly; make sure that communication is possible.
Be sure you can curl your ES node from the machine where you run the Java client.
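For example, here is a minimal sketch using the low-level REST client over port 9200; it assumes the org.elasticsearch.client:elasticsearch-rest-client dependency (6.3+, where the Request class exists) is on the classpath:

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Talk to the cluster over HTTP (9200) instead of the transport protocol (9300).
RestClient restClient = RestClient.builder(new HttpHost("127.0.0.1", 9200, "http")).build();

// Equivalent of the curl check: GET /_cluster/health
Response response = restClient.performRequest(new Request("GET", "/_cluster/health"));
System.out.println(EntityUtils.toString(response.getEntity()));
restClient.close();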
I am building a distributed workflow orchestrator; workers communicate with the server cluster over gRPC. If a new server is added to the cluster, the gRPC client is not able to detect this change. However, I have done a workaround by adding a max connection age to the server options:
grpc.KeepaliveParams(keepalive.ServerParameters{
    MaxConnectionAge: time.Minute * 1,
})
We have two implementations of workers, one in Golang and the other in Java. This workaround works perfectly with the Golang client: every minute the client makes a new connection and is able to detect new servers in the cluster. But it is not working with the Java client.
// Abridged; the surrounding class also declares:
// private final List<EquivalentAddressGroup> addresses = new ArrayList<>();
public CustomNameResolverFactory(String host, int port) {
    ManagedChannel managedChannel = NettyChannelBuilder
            .forAddress(host, port)
            .withOption(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000)
            .usePlaintext()
            .build();
    GetServersRequest request = GetServersRequest.newBuilder().build();
    GetServersResponse servers = TaskServiceGrpc.newBlockingStub(managedChannel).getServers(request);
    List<Server> serversList = servers.getServersList();
    LOGGER.info("found servers {}", servers);
    for (Server server : serversList) {
        // Each rpcAddr has the form "host:port".
        String rpcAddr = server.getRpcAddr();
        String[] split = rpcAddr.split(":");
        String hostName = split[0];
        int portN = Integer.parseInt(split[1]);
        addresses.add(new EquivalentAddressGroup(new InetSocketAddress(hostName, portN)));
    }
}
Java client code- https://github.com/Mohitkumar/orchy-worker-java/blob/master/src/main/java/com/orchy/client/CustomNameResolverFactory.java
Golang client code- https://github.com/Mohitkumar/orchy/blob/main/worker/lb/resolver.go
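The reason the workaround helps Go but not Java is likely the resolver itself: grpc-java re-resolves by calling NameResolver.refresh() after a GOAWAY or transport failure, so a resolver that fetches the server list only once in its constructor keeps handing back the stale list. Below is a hedged sketch of a resolver that re-queries the cluster on every (re-)resolution; fetchServers() is an assumed helper wrapping the GetServers RPC shown above:

import io.grpc.Attributes;
import io.grpc.EquivalentAddressGroup;
import io.grpc.NameResolver;
import java.util.List;

public class CustomNameResolver extends NameResolver {
    private Listener2 listener;

    @Override
    public String getServiceAuthority() {
        return "orchy"; // placeholder authority
    }

    @Override
    public void start(Listener2 listener) {
        this.listener = listener;
        resolve();
    }

    @Override
    public void refresh() {
        // Called by the channel after GOAWAY/transport failures:
        // re-query the cluster instead of reusing a cached list.
        resolve();
    }

    private void resolve() {
        List<EquivalentAddressGroup> addresses = fetchServers(); // assumed helper
        listener.onResult(ResolutionResult.newBuilder()
                .setAddresses(addresses)
                .setAttributes(Attributes.EMPTY)
                .build());
    }

    @Override
    public void shutdown() {
    }
}

Combined with the server-side MaxConnectionAge, each GOAWAY then triggers a fresh lookup, matching the Go client's behavior.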
I am trying to connect to AWS DocumentDB with the async MongoClient.
I created a DocumentDB cluster in AWS and can successfully connect from the command line over SSH.
I followed the guide here, created a MongoClient, and successfully connected and inserted events.
But when I tried to create a com.mongodb.async.client.MongoClient, the connection failed with the following error:
No server chosen by WritableServerSelector from cluster description
ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE,
serverDescriptions=[ServerDescription{address=aws-cluster:27017,
type=UNKNOWN, state=CONNECTING,
exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while
receiving message}, caused by
{io.netty.handler.timeout.ReadTimeoutException}}]}. Waiting for 30000
ms before timing out.
ClusterSettings clusterSettings = ClusterSettings.builder()
.applyConnectionString(new ConnectionString(connectionString)).build();
List<MongoCredential> credentials = new ArrayList<>();
credentials.add(
MongoCredential.createCredential(
mongoUserName,
mongoDBName,
mongoPassword));
MongoClientSettings settings = MongoClientSettings.builder()
.credentialList(credentials)
.clusterSettings(clusterSettings)
.streamFactoryFactory(new NettyStreamFactoryFactory())
.writeConcern(WriteConcern.ACKNOWLEDGED)
.build();
com.mongodb.async.client.MongoClient mongoClient = MongoClients.create(settings);
MongoDatabase testDB = mongoClient.getDatabase("myDB");
MongoCollection<Document> collection = testDB.getCollection("test");
Document doc = new Document("name", "MongoDB").append("type", "database");
// trying to insert a document => here I got the error
collection.insertOne(doc, new SingleResultCallback<Void>() {
    @Override
    public void onResult(final Void result, final Throwable t) {
        System.out.println("Inserted!");
    }
});
Do you have any ideas why this happens?
I solved it by using a URI:
String uri = "mongodb://<username>:<password>@<hostname>:27017/?ssl=true&ssl_ca_certs=cert";
MongoClientSettings settings = MongoClientSettings.builder()
.streamFactoryFactory(new NettyStreamFactoryFactory())
.applyConnectionString(new ConnectionString(uri))
.build();
com.mongodb.async.client.MongoClient mongoClient = MongoClients.create(settings);
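Equivalently, TLS can be switched on programmatically rather than through the query string. A sketch, assuming driver 3.7+ (where com.mongodb.MongoClientSettings and applyToSslSettings are available); untested against DocumentDB:

MongoClientSettings settings = MongoClientSettings.builder()
        .applyConnectionString(new ConnectionString("mongodb://<username>:<password>@<hostname>:27017"))
        // enable TLS for the connection, same effect as ssl=true in the URI
        .applyToSslSettings(builder -> builder.enabled(true))
        .streamFactoryFactory(new NettyStreamFactoryFactory())
        .build();
com.mongodb.async.client.MongoClient mongoClient = MongoClients.create(settings);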
I encountered a similar error; for me it was related to the TLS configuration.
I disabled TLS in DocumentDB: https://docs.aws.amazon.com/documentdb/latest/developerguide/security.encryption.ssl.html
In my case I had to restart the cluster after disabling TLS (TLS was not needed for the use case). After the restart, the connection was established successfully.
Timeout Without Using Proxy
I start netcat on my local machine as follows, which listens for connections on port 9090:
netcat -l -p 9090
And using Apache HttpComponents, I create a connection to it with a timeout of 4 seconds:
RequestConfig requestConfig = RequestConfig.custom()
.setSocketTimeout(4000)
.setConnectTimeout(4000)
.setConnectionRequestTimeout(4000)
.build();
HttpGet httpget = new HttpGet("http://127.0.0.1:9090");
httpget.setConfig(requestConfig);
try (CloseableHttpResponse response = HttpClients.createDefault().execute(httpget)) {}
In the terminal (where I have netcat running) I see:
??]?D???;#???9?Mۡ?NR?w?{)?V?$?(=?&?*kj?
?5??98?#?'<?%?)g#? ?/??32?,?+?0??.?2???/??-?1???D
<!-- 4 seconds later -->
read(net): Connection reset by peer
On the client side what I see is:
Exception in thread "main" org.apache.http.conn.ConnectTimeoutException:
Connect to 127.0.0.1:9090 [/127.0.0.1] failed: Read timed out
This is all expected.
Timeout Using Proxy
I change the client code slightly and configure a proxy, following the docs here.
RequestConfig requestConfig = RequestConfig.custom()
.setSocketTimeout(4000)
.setConnectTimeout(4000)
.setConnectionRequestTimeout(4000)
.build();
HttpHost proxy = new HttpHost("127.0.0.1", 9090);
DefaultProxyRoutePlanner routePlanner = new DefaultProxyRoutePlanner(proxy);
CloseableHttpClient httpclient = HttpClients.custom()
.setRoutePlanner(routePlanner)
.build();
HttpGet httpget = new HttpGet("https://127.0.0.1:9090");
httpget.setConfig(requestConfig);
try (CloseableHttpResponse response = httpclient.execute(httpget)) {}
And again I start netcat; this time on the server side I see:
CONNECT 127.0.0.1:9090 HTTP/1.1
Host: 127.0.0.1:9090
User-Agent: Apache-HttpClient/4.4.1 (Java/1.8.0_212)
But the timeout is not working for CONNECT. I just wait forever.
How can I configure the httpclient to timeout for 4 seconds just like in the first case I described?
RequestConfig parameters only take effect once a connection to the target via the specific route has been fully established. They do not apply to the SSL handshake or to any CONNECT requests that take place prior to the main message exchange.
Configure the socket timeout at the ConnectionManager level to ensure connection-level operations time out after a certain period of inactivity, as shown below.
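A sketch against HttpClient 4.x (routePlanner is the one from the question):

import org.apache.http.config.SocketConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
// This socket timeout applies to all connection-level I/O, including the
// CONNECT exchange with the proxy, before the RequestConfig timeouts kick in.
cm.setDefaultSocketConfig(SocketConfig.custom()
        .setSoTimeout(4000)
        .build());
CloseableHttpClient httpclient = HttpClients.custom()
        .setConnectionManager(cm)
        .setRoutePlanner(routePlanner)
        .build();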
One possibility:
// This part is the same..
httpget.setConfig(requestConfig);
ExecutorService executorService = Executors.newSingleThreadExecutor();
Callable<CloseableHttpResponse> callable = () -> httpclient.execute(httpget);
Future<CloseableHttpResponse> future = executorService.submit(callable);
try (CloseableHttpResponse response = future.get(4, TimeUnit.SECONDS)) {
    // handle the response here, before try-with-resources closes it
} catch (InterruptedException | ExecutionException | TimeoutException e) {
    httpget.abort();
} finally {
    executorService.shutdownNow();
}
But I am open to other suggestions..
If Elasticsearch runs in single-node mode, I can easily establish the RestHighLevelClient connection with this code:
RestHighLevelClient client = new RestHighLevelClient(
RestClient.builder(
new HttpHost("localhost", 9200, "http"),
new HttpHost("localhost", 9201, "http")));
But if my Elasticsearch cluster has 3 machines, e.g. "host1", "host2", "host3", how do I create the REST high-level client in cluster mode?
Thanks
RestHighLevelClient client = new RestHighLevelClient(
    RestClient.builder(
        new HttpHost("host1", 9200, "http"),
        new HttpHost("host2", 9200, "http"),
        new HttpHost("host3", 9200, "http")
    )
);
As the doc you appear to be referencing states, RestClient.builder accepts an array of HttpHosts to connect to. The client (which under the hood is the ES low-level REST client) will round-robin requests across these hosts. See also the Javadoc.
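If the set of nodes can change over time, the low-level client can also be kept in sync with the cluster via the optional sniffer. A sketch, assuming the separate org.elasticsearch.client:elasticsearch-rest-client-sniffer dependency:

import org.elasticsearch.client.sniff.Sniffer;

// Periodically polls the cluster for its current nodes and updates the client.
Sniffer sniffer = Sniffer.builder(client.getLowLevelClient()).build();
// ... and on shutdown, close the sniffer before the client:
sniffer.close();
client.close();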
As per the Elasticsearch docs you can pass multiple Elasticsearch hosts to RestClient.builder().
The better solution is to load the Elasticsearch hosts from configuration (application.conf in the case of a Scala-based application) instead of hardcoding them in the codebase.
Here is a Scala-based solution using Java varargs (:_*).
application.conf
es_hosts = ["x.x.x.x","x.x.x.x","x.x.x.x"] // You can even use service-name/service-discovery
es_port = 9200
es_scheme = "http"
Code snippet
import collection.JavaConverters._
import com.typesafe.config.ConfigFactory
import org.apache.http.HttpHost
import org.elasticsearch.client.{RestClient, RestHighLevelClient}

val config = ConfigFactory.load()
val port = config.getInt("es_port")
val scheme = config.getString("es_scheme")
val es_hosts = config.getStringList("es_hosts").asScala
val httpHosts = es_hosts.map(host => new HttpHost(host, port, scheme))
val builder = RestClient.builder(httpHosts: _*)
val high_level_client: RestHighLevelClient = new RestHighLevelClient(builder)
To create the high-level REST client using multiple hosts with Spring Data Elasticsearch's ClientConfiguration, you can do something like the following:
String[] esHosts = new String[]{"node1-example.com:9200", "node2-example.com:9200",
"node3-example.com:9200"};
final ClientConfiguration clientConfiguration = ClientConfiguration.builder()
.connectedTo(esHosts)
.build();
RestHighLevelClient restClient = RestClients.create(clientConfiguration).rest();
// Hostnames used for building the client can be verified as follows:
List<Node> nodes = restClient.getLowLevelClient().getNodes();
nodes.forEach(node -> System.out.println(node.toString()));
References:
Docs for High Level REST Client
Source code for ClientConfigurationBuilder
I am creating my Elasticsearch REST client with the steps below:
RestHighLevelClient client = null;
List<HttpHost> hostList = new ArrayList<>();
for (String host : hosts) {
    String[] hostDetails = host.split(":");
    hostList.add(new HttpHost(hostDetails[0], Integer.parseInt(hostDetails[1]), "https"));
}
try {
    client = new RestHighLevelClient(
            RestClient.builder(hostList.toArray(new HttpHost[0]))
                    .setHttpClientConfigCallback(httpClientBuilder ->
                            // set this only if auth is enabled
                            httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)));
} catch (Exception e) {
    log.error("exception occurred while setting elastic client", e);
}
I followed the steps mentioned here to secure my local ES installation using SearchGuard (no tag exists for it on SO). Now it is reachable via Postman only through basic authentication, with the default username/password admin/admin.
Now I need to allow my Spring Data ES project to access this ES installation.
I tried:
Settings esSettings = Settings.settingsBuilder()
.put("path.home", ".")
.put("cluster.name", clusterName)
.put("searchguard.ssl.transport.enabled", true)
.put("searchguard.ssl.transport.keystore_filepath", "kirk-keystore.jks")
.put("searchguard.ssl.transport.truststore_filepath", "truststore.jks")
.put("searchguard.ssl.transport.enforce_hostname_verification", false)
.put("request.headers.sg.impersonate.as", "admin")
.build();
TransportClient client = TransportClient.builder().settings(esSettings)
        .build()
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(elasticsearchHost), elasticsearchPort));
client.prepareGet().putHeader("Authorization", "Basic " + Base64.encodeBase64String("admin:admin".getBytes())).get();
return client;
Added header as suggested here.
But all I get is:
[elasticsearch[Meteorite][generic][T#3]] INFO org.elasticsearch.client.transport -
[Meteorite] failed to get node info for {#transport#-1}{127.0.0.1}{127.0.0.1:9300}, disconnecting...
org.elasticsearch.transport.NodeDisconnectedException: [][127.0.0.1:9300][cluster:monitor/nodes/liveness] disconnected
I need to get and post new data in ES (2.4.4).
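Since the cluster already answers HTTP basic auth on port 9200 (that is how Postman reaches it), one workaround is to bypass the transport client and use the REST layer directly. A minimal sketch with plain JDK classes; the index/type names and the plain-http URL are assumptions (SearchGuard may also require HTTPS on the REST layer, depending on its configuration):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

String auth = Base64.getEncoder()
        .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));

// POST a new document to index "myindex", type "mytype" (ES 2.x still uses types).
URL url = new URL("http://localhost:9200/myindex/mytype");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setRequestProperty("Authorization", "Basic " + auth);
conn.setRequestProperty("Content-Type", "application/json");
conn.setDoOutput(true);
try (OutputStream os = conn.getOutputStream()) {
    os.write("{\"field\":\"value\"}".getBytes(StandardCharsets.UTF_8));
}
System.out.println("HTTP " + conn.getResponseCode());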