Unable to create elasticsearch index from neo4j - elasticsearch

I want to create Elasticsearch indexes on Neo4j data.
I referred to https://github.com/neo4j-contrib/neo4j-elasticsearch and https://www.youtube.com/watch?v=SJLSFsXgOvA&ab_channel=AnmolAgrawal to create an Elasticsearch index from Neo4j.
But after that, I'm getting the below error in the neo4j.log file.
2016-11-08 12:20:09.825+0000 WARN Error updating ElasticSearch No Server is assigned to client to connect
io.searchbox.client.config.exception.NoServerConfiguredException: No Server is assigned to client to connect
at io.searchbox.client.AbstractJestClient$ServerPool.getNextServer(AbstractJestClient.java:132)
at io.searchbox.client.AbstractJestClient.getNextServer(AbstractJestClient.java:81)
at io.searchbox.client.http.JestHttpClient.prepareRequest(JestHttpClient.java:80)
at io.searchbox.client.http.JestHttpClient.executeAsync(JestHttpClient.java:60)
at org.neo4j.elasticsearch.ElasticSearchEventHandler.afterCommit(ElasticSearchEventHandler.java:81)
at org.neo4j.elasticsearch.ElasticSearchEventHandler.afterCommit(ElasticSearchEventHandler.java:27)
at org.neo4j.kernel.internal.TransactionEventHandlers.afterCommit(TransactionEventHandlers.java:149)
at org.neo4j.kernel.internal.TransactionEventHandlers.afterCommit(TransactionEventHandlers.java:47)
at org.neo4j.kernel.impl.api.TransactionHooks.afterCommit(TransactionHooks.java:75)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.afterCommit(KernelTransactionImplementation.java:541)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.commit(KernelTransactionImplementation.java:482)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.close(KernelTransactionImplementation.java:380)
at org.neo4j.server.rest.transactional.TransitionalTxManagementKernelTransaction.commit(TransitionalTxManagementKernelTransaction.java:92)
at org.neo4j.server.rest.transactional.TransactionHandle.closeContextAndCollectErrors(TransactionHandle.java:243)
at org.neo4j.server.rest.transactional.TransactionHandle.commit(TransactionHandle.java:151)
at org.neo4j.server.rest.web.TransactionalService.lambda$executeStatementsAndCommit$29(TransactionalService.java:202)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:71)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:57)
at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:302)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1510)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
How do I fix this error, or is there another way to update the index when a Neo4j node's property value changes?

As this is one of the very few pages that appears when you search Google for this string, I wanted to post a clear answer (one that J. Dimeo's answer above alludes to, but is far from specific).
In your Graylog config (/etc/graylog/server/server.conf for me), set elasticsearch_discovery_enabled to false, and restart the service.
That's it :)

Are you using AWS ElasticSearch? They do not allow connecting to individual nodes. I read elsewhere (from the AWS team):
"Looking over the logs, it seems that 'i.s.c.config.discovery.NodeChecker' is trying to auto discover and connect to the individual nodes of the cluster. Amazon is continuously working hard on improving the service features but unfortunately, at this moment AWS doesn't allow clients to connect to the individual nodes of the cluster. Instead, you can connect using the URL"
You need to turn off node discovery in the Jest client:
ClientConfig clientConfig = new ClientConfig.Builder("http://localhost:9200").discoveryEnabled(false).build();
See https://github.com/searchbox-io/Jest/blob/master/jest/README.md#node-discovery-through-nodes-api
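For the neo4j-elasticsearch plugin from the question, the missing server is what gets configured in conf/neo4j.conf. A minimal sketch based on the plugin's README — the host, index name, label, and property names below are assumptions to replace with your own:

```
# conf/neo4j.conf
elasticsearch.host_name=http://localhost:9200
# index spec format: <index-name>:<Label>(<prop1>,<prop2>)
elasticsearch.index_spec=people:Person(first_name,last_name)
```

If elasticsearch.host_name is missing or unreadable, the plugin's Jest client has no server assigned, which matches the NoServerConfiguredException above.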

Related

Unable Configure Oracle RDS instance in AWS Elastic Beanstalk

I want to configure an Oracle SE instance in the Elastic Beanstalk environment. I have chosen oracle-se from the selection box, but the related select options for version and instance class are not updated.
I have chosen db.t2.micro (due to free tier usage), but it shows I have selected db.m1.small. It then keeps prompting the error message below when I save that configuration.
For example:
Unable to retrieve RDS configuration options.
Configuration validation exception: Invalid option value:
'db.m1.small' (Namespace: 'aws:rds:dbinstance', OptionName:
'DBInstanceClass'): DBInstanceClass db.m1.small not supported for
oracle-se1 db engine.
Sample image with related error message
I have also searched other Stack Overflow questions, like Unable to add an RDS instance to Elastic Beanstalk; those posts state that AWS has resolved this problem, but it does not work for me.

What details do I need to GET data from elasticsearch cluster?

My team has data stored in Elasticsearch and has given me an API key, the URL of a remote cluster, and a username/password combination (for what, I don't know) to GET data.
How do I use this API key to get data from the Elasticsearch cluster with Python? I've looked through the docs, but none cover the use of a raw API key, and most involve localhost rather than a remote host as in my case.
Surely I need to know the names of nodes or indexes at least? What would I need the username/password combo for? There must be more details I need to connect than what I've been given?
We're moving from Node.js+Couchbase to Elasticsearch+Python, so I'm more than a bit lost.
TYIA
Most probably X-Pack basic security is enabled in your Elasticsearch (ES) cluster, which you can check by hitting http://<your-es-host>:9200; if it asks for a username/password, then you can provide what you have.
Please refer to the X-Pack page for more info.
In short, it's used to secure your cluster and indices. There are various types of authentication, and basic auth (which requires a username/password) is the one your team might be using.
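If the key you were given is an Elasticsearch API key rather than basic-auth credentials, it is sent in an Authorization: ApiKey header whose value is base64("id:api_key"). A minimal stdlib-only sketch — the host, index, and key values below are placeholders, not details from the question:

```python
import base64
import json
import urllib.request


def apikey_header(key_id, api_key):
    """Build the Authorization header Elasticsearch expects for API keys:
    ApiKey base64("id:api_key")."""
    token = base64.b64encode(f"{key_id}:{api_key}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"ApiKey {token}"}


def search(es_url, index, query, auth_headers):
    """POST a query to /<index>/_search and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{es_url}/{index}/_search",
        data=json.dumps({"query": query}).encode("utf-8"),
        headers={"Content-Type": "application/json", **auth_headers},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The official elasticsearch-py client accepts the same credentials directly (an api_key argument, or basic_auth for the username/password pair), so in practice you would pass them there rather than hand-building headers.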

Not able to insert index into ElasticSearch DB container on AKS

I have installed Elasticsearch and Kibana as containers in AKS. This is how the services look:
I am able to see that both services are up and running by hitting the external IP addresses. But the problem is that I am not sure whether Kibana is connected to Elasticsearch or not. How do I check that? I do not get a successful response when I hit the below URL:
I am using the below code to get the logs from my Azure LogAnalytics workspace and insert into ElasticSearch DB:
private static void UploadLogToElasticSearchDB(Microsoft.Azure.OperationalInsights.Models.Table dt)
{
    ElasticClient client = null;
    var uri = new Uri("http://13.87.227.42:9200/");
    var settings = new ConnectionSettings(uri);
    client = new ElasticClient(settings);
    settings.DefaultIndex("k8scontainercpu");
    for (int i = 0; i < dt.Rows.Count; i++)
    {
        var dtRowJSON = JsonConvert.SerializeObject(dt.Rows[i]);
        client.IndexAsync<string>(dtRowJSON, null);
    }
}
This program runs indefinitely without inserting any records; it does not give any errors either, and I do not see anything unusual in the program's Output window. How do I insert indexes into the Elasticsearch DB on AKS?
If you are able to connect using the external IP and port, then the service is working correctly. Outside the cluster, the internal service name won't be accessible.
You can open the Kibana external URL and check whether Kibana is able to connect to Elasticsearch. If Kibana cannot connect to Elasticsearch, that will be visible in Kibana's health status. However, if you are able to connect to Elasticsearch externally, Kibana should be able to connect to it easily.
Regarding index creation, you can use Kibana to create an index as well. See the link on how to create an index using Kibana.
ES also has an API to create an index: link
To troubleshoot which documents are not getting inserted into ES, I would suggest using the Index function (which is synchronous) and tracking the response of each call, so that you can identify what is happening. You can read about it at link
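The same switch — from fire-and-forget async calls to a synchronous call with a checked response — can be sketched against the plain Elasticsearch REST API. A stdlib-only Python sketch (the host and index names are placeholders taken from the question, not a verified setup):

```python
import json
import urllib.request


def build_index_request(es_url, index, doc):
    """Build a POST request that indexes one document via /<index>/_doc."""
    return urllib.request.Request(
        f"{es_url}/{index}/_doc",
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def index_document(es_url, index, doc):
    """Index synchronously and return the parsed response, so any failure
    (mapping error, auth, unreachable host) surfaces immediately instead of
    being silently dropped by an unawaited async call."""
    with urllib.request.urlopen(build_index_request(es_url, index, doc)) as resp:
        return json.loads(resp.read())
```

The point is the shape, not the transport: in the NEST code above, the IndexAsync calls are never awaited, so the responses (and any errors in them) are discarded, which is consistent with "no records, no errors".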

To perform search operation through UI in Jhipster using elastic search for existing database MySQL

I started JHipster for PoC purposes, and I need to perform a search through the UI for data already present in a MySQL database. I have the following doubts:
Do we need to install Elasticsearch and run it first in order to check the results?
Or is choosing Elasticsearch while creating the JHipster application and configuring it enough to use it further?
I have tried installing the generator-jhipster-elasticsearch-reindexer module, but it did not work as expected. After installing it, I ended up with the following error.
java.lang.IllegalStateException: handshake failed, mismatched cluster name [Cluster [internal-test-cluster-name2843e241-29cc-4bc0-82db-600eb78ed261]] - {127.0.0.1:9300}{pbkSwq2SQ-CTopOjTqsVcg}{127.0.0.1}{127.0.0.1:9300}
at org.elasticsearch.transport.TransportService.handshake(TransportService.java:404)
at org.elasticsearch.transport.TransportService.handshake(TransportService.java:367)
at org.elasticsearch.discovery.zen.UnicastZenPing$PingingRound.getOrConnect(UnicastZenPing.java:366)
at org.elasticsearch.discovery.zen.UnicastZenPing$3.doRun(UnicastZenPing.java:471)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:723)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
The above error was resolved by adding spring.data.jest.uri in application-dev.yml, but the search mechanism is still not working, i.e. it is not able to query the existing database.
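For reference, the property mentioned above looks like this in application-dev.yml (the host and port are my local defaults; adjust them to your cluster):

```yaml
# src/main/resources/config/application-dev.yml
spring:
  data:
    jest:
      uri: http://localhost:9200
```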
Actual result: I am able to perform a search from the UI when I create an entity from the UI. The reason is that when an entity is created through the API, it updates the Elasticsearch index and produces the result.
Expected result: to be able to perform the same search for already existing data when I connect to the MySQL database.
JHipster already explains how to work with Elasticsearch on its website:
https://www.jhipster.tech/using-elasticsearch/
Here is a short answer to your question:
1. You do not need to install Elasticsearch if you run your app in the dev profile, because it uses an embedded Elasticsearch instance.
2. You must select the Elasticsearch option while creating the JHipster app so that the generator adds search capabilities to your code.
The generator-jhipster-elasticsearch-reindexer module only works if you have enabled Elasticsearch in your app.

Heroku Mongo Url (Copying db from heroku)

I'm trying to write a script that copies my Heroku DB to my local DB (MongoDB), but I don't have a clue what kind of URL format this is:
mongodb://<username>:<password>@lamppost.5.mongolayer.com:10049,lamppost.4.mongolayer.com:10049/<appname>
Why are there two URLs, comma separated?
Does anyone have a working script to share? :)
The format is a MongoDB Connection String URI for a replica set.
The two hosts listed are members of the replica set provided as a "seed list" for connecting to your MongoHQ instance. Specifying more than one member of the replica set allows for failover -- the MongoDB driver will attempt to connect to the first available member of the seed list in order to discover the current configuration of a replica set.
You can use this URI to connect from Ruby via MongoClient.from_uri(uri).
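To see concretely what the seed list contains, you can split the connection string by hand. A small Python sketch — the hostnames come from the question, but the username, password, and database name are placeholders:

```python
def seed_hosts(uri):
    """Return the replica-set seed hosts from a MongoDB connection string."""
    rest = uri.removeprefix("mongodb://")
    rest = rest.split("@", 1)[-1]            # drop user:password credentials
    return rest.split("/", 1)[0].split(",")  # drop /<database>, split the hosts


hosts = seed_hosts(
    "mongodb://user:secret@lamppost.5.mongolayer.com:10049,"
    "lamppost.4.mongolayer.com:10049/myapp"
)
```

In Python, pymongo's MongoClient accepts the same URI string directly, just as Ruby's MongoClient.from_uri does, so you normally never need to parse it yourself.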
