path.home is not configured in elasticsearch

Exception in thread "main" java.lang.IllegalStateException: path.home is not configured
at org.elasticsearch.env.Environment.<init>(Environment.java:101)
at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:81)
at org.elasticsearch.node.Node.<init>(Node.java:128)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.node.NodeBuilder.node(NodeBuilder.java:152)
at JavaAPIMain.main(JavaAPIMain.java:43)
// Adding a document to Elasticsearch using the Java API
Node node = nodeBuilder().clusterName("myapplication").node();
Client client = node.client();
client.prepareIndex("kodcucom", "article", "1")
        .setSource(putJsonDocument("ElasticSearch: Java",
                "ElasticSeach provides Java API, thus it executes all operations " +
                "asynchronously by using client object..",
                new Date(),
                new String[]{"elasticsearch"},
                "Hüseyin Akdoğan"))
        .execute().actionGet();

How about trying this one:
NodeBuilder.nodeBuilder()
        .settings(Settings.builder()
                .put("path.home", "/path/to/elasticsearch/home/dir"))
        .node();
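Applied to the snippet from the question, the same fix would look roughly like this (a sketch against the 2.x NodeBuilder API; the home directory path is a placeholder):
Node node = NodeBuilder.nodeBuilder()
        .clusterName("myapplication")
        .settings(Settings.builder()
                .put("path.home", "/path/to/elasticsearch/home/dir"))
        .node();
Client client = node.client();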
Credits: https://github.com/elastic/elasticsearch/issues/15325
Always ask Google about your error message first. There are more than 5k results for your problem.

If you are using IntelliJ or Eclipse, edit the run configuration and add the line below to your VM options:
-Des.path.home={dropwizard installation directory}
For example, on my Mac:
-Des.path.home=/Users/supreeth.vp/elasticsearch-2.3.4/bin
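If you prefer not to touch the IDE run configuration, the same property can presumably be set programmatically before the node is built, since 2.x picks up es.*-prefixed system properties as settings (a sketch, not verified against every 2.x release; the path is a placeholder):
// Equivalent to passing -Des.path.home=... on the JVM command line.
System.setProperty("es.path.home", "/path/to/elasticsearch/home/dir");
Node node = NodeBuilder.nodeBuilder().clusterName("myapplication").node();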

Related

Serializer for custom type 'janusgraph.RelationIdentifier' not found

Janus Server config
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { serializeResultToString: true }}
Java/Spring Boot config
@Bean
public Cluster cluster() {
    return Cluster.build()
            .addContactPoint(dbUrl)
            .port(dbPort)
            .serializer(new GraphBinaryMessageSerializerV1())
            .maxConnectionPoolSize(5)
            .maxInProcessPerConnection(1)
            .maxSimultaneousUsagePerConnection(10)
            .create();
}
I am getting the following error:
Caused by: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: java.io.IOException: Serializer for custom type 'janusgraph.RelationIdentifier' not found
at org.apache.tinkerpop.gremlin.driver.ser.binary.ResponseMessageSerializer.readValue(ResponseMessageSerializer.java:59) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1.deserializeResponse(GraphBinaryMessageSerializerV1.java:180) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:47) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:35) ~[gremlin-driver-3.6.1.jar:3.6.1]
Note: I didn't define the schema. I am migrating working code from AWS Neptune to JanusGraph.
Any idea why I am getting the above error?
Get queries are working, and a few mutation queries are working as well...
It looks like you have only defined the serializer for JanusGraph types on the server, but not on the client side. You also need to add the JanusGraphIoRegistry on the client side.
This can be done like this:
TypeSerializerRegistry typeSerializerRegistry = TypeSerializerRegistry.build()
        .addRegistry(JanusGraphIoRegistry.instance())
        .create();

Cluster.build()
        .addContactPoint(dbUrl)
        .port(dbPort)
        .serializer(new GraphBinaryMessageSerializerV1(typeSerializerRegistry))
        .maxConnectionPoolSize(5)
        .maxInProcessPerConnection(1)
        .maxSimultaneousUsagePerConnection(10)
        .create();
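With the registry wired in on the client, results containing JanusGraph types such as RelationIdentifier (edge ids) should deserialize correctly. A minimal usage sketch, assuming a reachable server and the dbUrl/dbPort values from the question:
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Result;

Cluster cluster = Cluster.build()
        .addContactPoint(dbUrl)
        .port(dbPort)
        .serializer(new GraphBinaryMessageSerializerV1(typeSerializerRegistry))
        .create();
Client client = cluster.connect();
// Edge ids are janusgraph.RelationIdentifier values, exactly the type that failed to deserialize before.
for (Result r : client.submit("g.E().limit(1).id()").all().get()) {
    System.out.println(r.getObject());
}
cluster.close();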
Alternatively, you can use a config file for this, which simplifies the code down to:
import static org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource.traversal;
GraphTraversalSource g = traversal().withRemote("conf/remote-graph.properties");
(I have already created the GraphTraversalSource here because the client is directly created internally by withRemote().)
This is also described in the JanusGraph documentation under Connecting from Java. Note that I've linked to the docs for the upcoming 1.0.0 release because the documentation for the latest released version still uses Gryo instead of GraphBinary. You can already use this approach with JanusGraph 0.6, and it makes sense to use GraphBinary instead of Gryo, because support for Gryo will be dropped in version 1.0.0.
The config file conf/remote-graph.properties then looks like this (also taken from the JanusGraph documentation):
hosts: [localhost]
port: 8182
serializer: {
className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1,
config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
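Strictly speaking, in the TinkerPop/JanusGraph samples the snippet above is usually the content of a separate YAML cluster file (e.g. conf/remote-objects.yaml), and the properties file handed to withRemote() just points at it. A sketch of that pairing, assuming the standard TinkerPop property names:
# conf/remote-graph.properties (assumed layout)
gremlin.remote.remoteConnectionClass=org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection
gremlin.remote.driver.clusterFile=conf/remote-objects.yaml
gremlin.remote.driver.sourceName=g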
You can also specify the various options that you are currently specifying via the builder. This configuration is documented in the TinkerPop reference docs.

Break in fabric8 kubernetes client mock server

We were using the fabric8 Kubernetes client 5.3.x for a watcher and it worked fine. Recently, when we moved to 5.11.2, we observed many changes and eventually the JUnit tests started failing.
We use io.fabric8.kubernetes.client.server.mock.KubernetesServer.
Earlier we were using ContainerStatus.withNewReady, which now seems to be removed.
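If the generated withNewReady fluent setter is gone in 5.11.x, the plain withReady(Boolean) setter on ContainerStatusBuilder should presumably cover the same case; a sketch, not verified against 5.11.2, with a hypothetical container name:
import io.fabric8.kubernetes.api.model.ContainerStatus;
import io.fabric8.kubernetes.api.model.ContainerStatusBuilder;

// Build a "ready" container status with the plain setter instead of withNewReady().
ContainerStatus status = new ContainerStatusBuilder()
        .withName("app-container")   // hypothetical name
        .withReady(true)
        .build();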
We then added the following annotation:
@Rule
public KubernetesServer myMockServer = new KubernetesServer(false, true);
After this, we are getting the following logs stating an unsupported label requirement, even though the application code is sending that label.
[2022-02-03T07:30:22.733Z] 07:30:17.812 [pool-1-thread-1] DEBUG io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager - Watching http://localhost:40033/api/v1/namespaces/test/pods?labelSelector=app.kubernetes.io%2Fname%20in%20%28apps%29&timeoutSeconds=0&allowWatchBookmarks=true&watch=true...
[2022-02-03T07:30:22.733Z] 07:30:17.814 [MockWebServer /127.0.0.1:49882] WARN io.fabric8.kubernetes.client.server.mock.KubernetesAttributesExtractor - Ignoring unsupported label requirement: app.kubernetes.io/name in (apps)
[2022-02-03T07:30:22.733Z] 07:30:17.815 [MockWebServer /127.0.0.1:49882] DEBUG io.fabric8.kubernetes.client.server.mock.KubernetesAttributesExtractor - fromPath /api/v1/namespaces/test/pods?labelSelector=app.kubernetes.io%2Fname%20in%20%28apps%29&timeoutSeconds=0&allowWatchBookmarks=true&watch=true : {attributes: {namespace={key:namespace, value:test}, version={key:version, value:v1}, plural={key:plural, value:pods}}}
[2022-02-03T07:30:22.733Z] 07:30:17.815 [OkHttp http://localhost:40033/...] DEBUG io.fabric8.kubernetes.client.dsl.internal.WatcherWebSocketListener - WebSocket successfully opened
[2022-02-03T07:30:22.733Z] 07:30:20.818 [OkHttp http://localhost:40033/...] DEBUG io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager - Scheduling reconnect task
Is there something more that should be done? The tests end up in the ERROR state, not FAILED.
Sample JUnit test:
@Test
public void testAddNewPodWatchEvent()
{
    // given
    doReturn(myClientMock).when(myTestObj).getClient();
    doReturn(myWatcherSpy).when(myTestObj).createEventHandler();
    String PATH =
            "/api/v1/namespaces/test/pods?labelSelector=app.kubernetes.io%2Fname%20in%20%28apps%29&timeoutSeconds=0&watch=true";
    Map<String, String> mockLabelMap = new HashMap<>();
    mockLabelMap.put("foo", "testlabel");
    mockLabelMap.put("app.kubernetes.io/name", "apps");
    Pod accPod = createAppsPod(mockLabelMap, true);
    myMockServer.expect()
            .get()
            .withPath(PATH)
            .andUpgradeToWebSocket()
            .open()
            .waitFor(100)
            .andEmit(new WatchEvent(accPod, "ADDED"))
            .done()
            .once();
    // when
    myTestObj.activate(mockProps);
    sleepForWatchToBeInvoked();
    // then
    verify(myWatcherSpy, atLeastOnce()).eventReceived(Action.ADDED, accPod);
}

ElasticsearchStatusException contains unrecognized parameter: [ccs_minimize_roundtrips]]]

I am trying to do a simple search against an Elasticsearch server and am getting the following error:
ElasticsearchStatusException[Elasticsearch exception [type=illegal_argument_exception, reason=request [/recordlist1/_search] contains unrecognized parameter: [ccs_minimize_roundtrips]]]
The query string:
{"query":{"match_all":{"boost":1.0}}}
I am using:
elasticsearch-rest-high-level-client (Maven artifact)
SearchRequest searchRequest = new SearchRequest(INDEX);
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.query(QueryBuilders.matchAllQuery());
searchRequest.source(searchSourceBuilder);
try
{
    System.out.print(searchRequest.source());
    SearchResponse response = getConnection().search(searchRequest, RequestOptions.DEFAULT);
    SearchHit[] results = response.getHits().getHits();
    for (SearchHit hit : results)
    {
        String sourceAsString = hit.getSourceAsString();
        System.out.println(gson.fromJson(sourceAsString, Record.class).year);
    }
}
catch (ElasticsearchException e)
{
    e.getDetailedMessage();
    e.printStackTrace();
}
catch (java.io.IOException ex)
{
    ex.getLocalizedMessage();
    ex.printStackTrace();
}
This usually occurs when porting from Elasticsearch 6.x.x to 7.x.x.
You can downgrade the Elasticsearch client to version 6.7.1 and try running it again.
Since you are using Maven, make sure your dependency looks like this:
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.7.1</version>
</dependency>
I ran into this same issue when I mistakenly still had my 6.5 cluster running while using the 7.2 API. Once I started up my 7.2 cluster, the exception went away.
The problem here is a version mismatch: you were probably using Elasticsearch 6.x.x before and are now using 7.x.x.
You can solve this by running an Elasticsearch server on 7.x.x as well.
Elasticsearch 6.x.x had document types (you could give a type to your documents), but from Elasticsearch 7.x.x onwards there are no custom types, only the default type _doc, so you need to use _doc as your type when creating mappings.
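For example, with the 7.x high-level REST client a mapping is created without any custom type (a rough sketch; the index name comes from the question, while the year field type is an assumption based on the Record class used above):
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.common.xcontent.XContentType;

CreateIndexRequest request = new CreateIndexRequest("recordlist1");
// No type in the mapping; 7.x implicitly uses _doc.
request.mapping("{ \"properties\": { \"year\": { \"type\": \"integer\" } } }", XContentType.JSON);
client.indices().create(request, RequestOptions.DEFAULT);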
You may find this in the stack trace of the exception:
Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [http://127.0.0.1:9200], URI [/recordlist1/_search?rest_total_hits_as_int=true&typed_keys=true&ignore_unavailable=false&expand_wildcards=open%2Cclosed&allow_no_indices=true&ignore_throttled=false&search_type=query_then_fetch&batched_reduce_size=512], status line [HTTP/1.1 400 Bad Request]
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"request [/_search] contains unrecognized parameters: [ignore_throttled], [rest_total_hits_as_int]"}],"type":"illegal_argument_exception","reason":"request [/_search] contains unrecognized parameters: [ignore_throttled], [rest_total_hits_as_int]"},"status":400}
You can reproduce the same error message with a GET request via curl (quote the URL so the shell does not split it on &):
curl -XGET "http://127.0.0.1:9200/recordlist1/_search?rest_total_hits_as_int=true&typed_keys=true&ignore_unavailable=false&expand_wildcards=open%2Cclosed&allow_no_indices=true&ignore_throttled=false&search_type=query_then_fetch&batched_reduce_size=512"
After deleting rest_total_hits_as_int=true from the URL, the error went away. Case closed.
You should check your server's version with elasticsearch -V and your client's version in Maven.
The high-level client adds rest_total_hits_as_int=true by default, and I found no way to set it to false; see org.elasticsearch.client.RequestConverters#addSearchRequestParams, Line:395 <v6.8.10>.
I had no other choice but to match the client version to the server version.
Why so strict? Well, after all, it is "High Level".
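As a quick programmatic cross-check of the server version, the client's info() call reports what the cluster is running (a sketch against the 7.x high-level client; 6.x exposes the version through slightly different accessors):
import org.elasticsearch.client.core.MainResponse;

MainResponse info = client.info(RequestOptions.DEFAULT);
System.out.println("Server version: " + info.getVersion().getNumber());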

Search Guard With Spring Data ES

I followed the steps mentioned here to secure my local ES installation using Search Guard (no tag exists for it on SO). Now it is reachable via Postman only through basic authentication, with the default username/password admin/admin.
Now, I need to allow my Spring Data ES project to be able to access this ES installation.
I tried:
Settings esSettings = Settings.settingsBuilder()
        .put("path.home", ".")
        .put("cluster.name", clusterName)
        .put("searchguard.ssl.transport.enabled", true)
        .put("searchguard.ssl.transport.keystore_filepath", "kirk-keystore.jks")
        .put("searchguard.ssl.transport.truststore_filepath", "truststore.jks")
        .put("searchguard.ssl.transport.enforce_hostname_verification", false)
        .put("request.headers.sg.impersonate.as", "admin")
        .build();
TransportClient client = TransportClient.builder().settings(esSettings)
        .build().addTransportAddress(
                new InetSocketTransportAddress(InetAddress.getByName(elasticsearchHost), elasticsearchPort));
// encodeBase64 returns a byte[]; encodeBase64String produces the string the header needs.
client.prepareGet().putHeader("Authorization",
        "Basic " + Base64.encodeBase64String("admin:admin".getBytes())).get();
return client;
Added header as suggested here.
But all I get is:
[elasticsearch[Meteorite][generic][T#3]] INFO org.elasticsearch.client.transport -
[Meteorite] failed to get node info for {#transport#-1}{127.0.0.1}{127.0.0.1:9300}, disconnecting...
org.elasticsearch.transport.NodeDisconnectedException: [][127.0.0.1:9300][cluster:monitor/nodes/liveness] disconnected
I need to get and post new data in ES (2.4.4).

Building a JMX client in a servlet installed on the Deployment Manager

I'm building a monitoring application as a servlet running on my WebSphere 7 ND deployment manager. The tool uses JMX to query the deployment manager for various data. Global Security is enabled on the dmgr.
I'm having problems getting this to work, however. My first attempt was to use the WebSphere client code:
String sslProps = "file:" + base + "/properties/ssl.client.props";
System.setProperty("com.ibm.SSL.ConfigURL", sslProps);
String soapProps = "file:" + base + "/properties/soap.client.props";
System.setProperty("com.ibm.SOAP.ConfigURL", soapProps);
Properties connectProps = new Properties();
connectProps.setProperty(AdminClient.CONNECTOR_TYPE, AdminClient.CONNECTOR_TYPE_SOAP);
connectProps.setProperty(AdminClient.CONNECTOR_HOST, dmgrHost);
connectProps.setProperty(AdminClient.CONNECTOR_PORT, soapPort);
connectProps.setProperty(AdminClient.CONNECTOR_SECURITY_ENABLED, "true");
AdminClient adminClient = AdminClientFactory.createAdminClient(connectProps);
This results in the following exception:
Caused by: com.ibm.websphere.management.exception.ConnectorNotAvailableException: ADMC0016E: The system cannot create a SOAP connector to connect to host ssunlab10.apaceng.net at port 13903.
at com.ibm.ws.management.connector.soap.SOAPConnectorClient.getUrl(SOAPConnectorClient.java:1306)
at com.ibm.ws.management.connector.soap.SOAPConnectorClient.access$300(SOAPConnectorClient.java:128)
at com.ibm.ws.management.connector.soap.SOAPConnectorClient$4.run(SOAPConnectorClient.java:370)
at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
at com.ibm.ws.management.connector.soap.SOAPConnectorClient.reconnect(SOAPConnectorClient.java:363)
... 22 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:519)
at java.net.Socket.connect(Socket.java:469)
at java.net.Socket.<init>(Socket.java:366)
at java.net.Socket.<init>(Socket.java:209)
at com.ibm.ws.management.connector.soap.SOAPConnectorClient.getUrl(SOAPConnectorClient.java:1286)
... 26 more
So I then tried to do it via RMI by adding sas.client.props to the environment and setting the connector type in the code to CONNECTOR_TYPE_RMI. This time I got a NameNotFoundException out of CORBA:
Caused by: javax.naming.NameNotFoundException: Context: , name: JMXConnector: First component in name JMXConnector not found. [Root exception is org.omg.CosNaming.NamingContextPackage.NotFound: IDL:omg.org/CosNaming/NamingContext/NotFound:1.0]
To see if it was an IBM issue, I tried using the standard JMX connector as well with the same result (substitute AdminClient for JMXConnector in the above error)
JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/JMXConnector");
Hashtable<String, Object> h = new Hashtable<>();
String providerUrl = "corbaloc:iiop:" + dmgrHost + ":" + rmiPort + "/WsnAdminNameService";
h.put(Context.PROVIDER_URL, providerUrl);
// Specify the user ID and password for the server if security is enabled on the server.
String[] credentials = new String[] { "***", "***" };
h.put("jmx.remote.credentials", credentials);
// Establish the JMX connection.
JMXConnector jmxc = JMXConnectorFactory.connect(url, h);
// Get the MBean server connection instance.
MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
At this point, in desperation, I wrote a wsadmin script to run both the RMI and SOAP methods. To my amazement, this works fine. So my question is: why does the code not work in a servlet installed on the dmgr?
regards,
Trevor
For the SOAP error, the ConnectException looks like the wrong SOAP host/port was used for the dmgr. I would double-check the server logs for the SOAP port. For the RMI error (NameNotFoundException), it looks like you're trying to use JMXConnectorFactory, which isn't supported by WAS.
If your application is installed on the dmgr, it's probably easiest to just use AdminServiceFactory.getAdminService to get an in-process reference to the AdminService rather than trying to open a new connection to the same process:
http://publib.boulder.ibm.com/infocenter/wasinfo/fep/topic/com.ibm.websphere.javadoc.doc/web/apidocs/com/ibm/websphere/management/AdminServiceFactory.html
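For completeness, a minimal sketch of that in-process approach (assuming the servlet really runs inside the dmgr JVM and the WAS admin classes are on the classpath; the MBean query is illustrative only):
import java.util.Set;
import javax.management.ObjectName;
import com.ibm.websphere.management.AdminService;
import com.ibm.websphere.management.AdminServiceFactory;

// No SOAP/RMI connector needed: this talks to the local MBean server of the dmgr process.
AdminService adminService = AdminServiceFactory.getAdminService();
Set serverNames = adminService.queryNames(new ObjectName("WebSphere:type=Server,*"), null);
for (Object name : serverNames) {
    System.out.println(name);
}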
