I have installed Elasticsearch and Kibana as containers in AKS. This is what the services look like:
I can see that both services are up and running by hitting their external IP addresses. The problem is that I am not sure whether Kibana is actually connected to Elasticsearch. How do I check that? I do not get a successful response when I hit the below URL:
I am using the below code to fetch logs from my Azure Log Analytics workspace and insert them into the Elasticsearch DB:
private static void UploadLogToElasticSearchDB(Microsoft.Azure.OperationalInsights.Models.Table dt)
{
    ElasticClient client = null;
    var uri = new Uri("http://13.87.227.42:9200/");
    var settings = new ConnectionSettings(uri);
    client = new ElasticClient(settings);
    settings.DefaultIndex("k8scontainercpu");

    for (int i = 0; i < dt.Rows.Count; i++)
    {
        var dtRowJSON = JsonConvert.SerializeObject(dt.Rows[i]);
        client.IndexAsync<string>(dtRowJSON, null);
    }
}
This program runs indefinitely without inserting any records; it does not throw any errors either, and I do not see anything unusual in the program's Output window. How do I get documents indexed into the Elasticsearch DB running in AKS?
If you are able to connect using the external IP and port, then the service is working correctly. The internal service name is not accessible from outside the cluster.
You can open the Kibana external URL and check whether Kibana is able to connect to Elasticsearch. If Kibana cannot connect to Elasticsearch, that will be visible in Kibana's health status. However, if you are able to reach Elasticsearch externally, Kibana should be able to connect to it without trouble.
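One quick way to check (an assumption on my part: this uses Kibana's status API, which is available in recent Kibana versions) is to query the status endpoint on the Kibana external IP and look at the Elasticsearch connection state in the response:

curl http://<kibana-external-ip>:5601/api/status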
Regarding index creation, you can also use Kibana to create an index. See link on how to create an index using Kibana.
Elasticsearch also has an API to create an index: link
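For example (a sketch using the host and index name from the question; note that with default settings, indexing a document will also auto-create the index if it does not exist):

curl -X PUT "http://13.87.227.42:9200/k8scontainercpu"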
To troubleshoot why documents are not getting inserted into ES, I would suggest using the Index function (which is synchronous) and tracking the response of each call, so that you can identify what is happening. You can read about it at link
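For illustration, here is a minimal sketch of that approach (assuming a recent NEST client, 7.x, and that the OperationalInsights Table exposes Columns and Rows as used below; the host and index name come from the question, and the row-to-dictionary mapping is only illustrative):

using System;
using System.Collections.Generic;
using System.Linq;
using Nest;

private static void UploadLogToElasticSearchDB(Microsoft.Azure.OperationalInsights.Models.Table dt)
{
    // Set the default index before building the client so every Index call targets it.
    var settings = new ConnectionSettings(new Uri("http://13.87.227.42:9200/"))
        .DefaultIndex("k8scontainercpu");
    var client = new ElasticClient(settings);

    for (int i = 0; i < dt.Rows.Count; i++)
    {
        // Map the row to field-name/value pairs; NEST serializes the dictionary itself,
        // so there is no need to pre-serialize the row with JsonConvert.
        var doc = dt.Columns
            .Select((col, c) => new { col.Name, Value = dt.Rows[i][c] })
            .ToDictionary(x => x.Name, x => (object)x.Value);

        // Synchronous Index call: the response shows whether each document was inserted.
        var response = client.IndexDocument(doc);
        if (!response.IsValid)
        {
            Console.WriteLine(response.DebugInformation);
        }
    }
}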
Related
I want to send an alert from Kibana whenever someone adds a document that meets the conditions. I am using Elastic Cloud, Kibana version 8.5.2.
Below are my rule configurations
I am indexing the document from the Dev Tools API; the query works, but no alert is sent when I index a new document. Does anyone know what is going wrong in my configuration?
I am trying to use Elasticsearch with my Neo4j database for fast querying. I tried many sites, but they are all old articles, so I didn't get a clear idea. These are the steps I have followed so far:
Installed neo4j
Installed elasticsearch
Copy-pasted the Elasticsearch plugin into the Neo4j plugins folder
Added these lines to the neo4j.properties file:
elasticsearch.host_name=http://localhost:9200
elasticsearch.index_spec=people:Person(first_name,last_name), places:Place(name)
My question is: how are Elasticsearch and Neo4j integrated? Please clarify this for me.
I followed this:
Link
You have to install the APOC procedures plugin (https://github.com/neo4j-contrib/neo4j-apoc-procedures). The documentation about ES integration is here: ES Integration with Apoc procedures
[edit]
Download the apoc.jar matching your targeted Neo4j version and drop it into Neo4j's plugins directory
Restart Neo4j
In the Neo4j web browser, launch the following Cypher query to show all ES procedures:
CALL apoc.help("apoc.es")
Sample query for logs:
CALL apoc.es.getRaw("localhost","_search?q=level:ERROR",null)
YIELD value
UNWIND value.hits.hits as hits
RETURN hits LIMIT 100
The recommended way is to store the ES host in neo4j.conf by adding a key (restart Neo4j afterwards):
apoc.es.myKey.url=localhost
Then the query looks like:
CALL apoc.es.getRaw("myKey","_search?q=level:ERROR",null)
YIELD value
UNWIND value.hits.hits as hits
RETURN hits LIMIT 100
For those of you who already have the APOC plugin installed and accessible, but don't have access to the neo4j.properties file (or are more comfortable working with ES through curl), you can do this without apoc.es.getRaw and instead use the JSON returned by apoc.load.json:
WITH "http://myelasticurl:9200/my_index/_search?q=level:ERROR" as search_url
CALL apoc.load.json(search_url) YIELD value
UNWIND value.hits.hits as hit
WITH hit._source as source
...
// do work
...
I want to create elastic search indexes on neo4j data.
I referred to https://github.com/neo4j-contrib/neo4j-elasticsearch and https://www.youtube.com/watch?v=SJLSFsXgOvA&ab_channel=AnmolAgrawal to create an Elasticsearch index from Neo4j.
But after that, I'm getting the below error in the neo4j.log file.
2016-11-08 12:20:09.825+0000 WARN Error updating ElasticSearch No Server is assigned to client to connect
io.searchbox.client.config.exception.NoServerConfiguredException: No Server is assigned to client to connect
at io.searchbox.client.AbstractJestClient$ServerPool.getNextServer(AbstractJestClient.java:132)
at io.searchbox.client.AbstractJestClient.getNextServer(AbstractJestClient.java:81)
at io.searchbox.client.http.JestHttpClient.prepareRequest(JestHttpClient.java:80)
at io.searchbox.client.http.JestHttpClient.executeAsync(JestHttpClient.java:60)
at org.neo4j.elasticsearch.ElasticSearchEventHandler.afterCommit(ElasticSearchEventHandler.java:81)
at org.neo4j.elasticsearch.ElasticSearchEventHandler.afterCommit(ElasticSearchEventHandler.java:27)
at org.neo4j.kernel.internal.TransactionEventHandlers.afterCommit(TransactionEventHandlers.java:149)
at org.neo4j.kernel.internal.TransactionEventHandlers.afterCommit(TransactionEventHandlers.java:47)
at org.neo4j.kernel.impl.api.TransactionHooks.afterCommit(TransactionHooks.java:75)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.afterCommit(KernelTransactionImplementation.java:541)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.commit(KernelTransactionImplementation.java:482)
at org.neo4j.kernel.impl.api.KernelTransactionImplementation.close(KernelTransactionImplementation.java:380)
at org.neo4j.server.rest.transactional.TransitionalTxManagementKernelTransaction.commit(TransitionalTxManagementKernelTransaction.java:92)
at org.neo4j.server.rest.transactional.TransactionHandle.closeContextAndCollectErrors(TransactionHandle.java:243)
at org.neo4j.server.rest.transactional.TransactionHandle.commit(TransactionHandle.java:151)
at org.neo4j.server.rest.web.TransactionalService.lambda$executeStatementsAndCommit$29(TransactionalService.java:202)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:71)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:57)
at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:302)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1510)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
How do I fix this error, or is there another way to update the index when a Neo4j node's property value changes?
As this is one of the very few pages that appears when you search Google for this string, I wanted to post a clear answer (one that J. Dimeo's answer above alludes to, but is far from specific about).
In your Graylog config (/etc/graylog/server/server.conf for me), set elasticsearch_discovery_enabled to false, and restart the service.
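For reference, a minimal sketch of the change (the restart command assumes a systemd-based install):

# /etc/graylog/server/server.conf
elasticsearch_discovery_enabled = false

sudo systemctl restart graylog-server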
That's it :)
Are you using AWS ElasticSearch? They do not allow connecting to individual nodes. I read elsewhere (from the AWS team):
"Looking over the logs, it seems that 'i.s.c.config.discovery.NodeChecker' is trying to auto discover and connect to the individual nodes of the cluster. Amazon is continuously working hard on improving the service features but unfortunately, at this moment AWS doesn't allow clients to connect to the individual nodes of the cluster. Instead, you can connect using the URL"
You need to turn off node discovery in the Jest client somehow:
ClientConfig clientConfig = new ClientConfig.Builder("http://localhost:9200").discoveryEnabled(false).build();
See https://github.com/searchbox-io/Jest/blob/master/jest/README.md#node-discovery-through-nodes-api
I have created some indexes in Elasticsearch with the mapper attachment plugin. However, when I try to create the index in Kibana, I cannot find any of the data created in Elasticsearch for building a dashboard in Kibana.
Is there any way to resolve this issue?
Try running http://<your-elasticsearch-host>:9200/_cat/indices?v
The above will return all the indexes you have. Once you have verified that your mapper attachment index is there, go to the Settings tab in Kibana and select the checkbox saying your index does not contain time-series data. Now type in your index name, and hopefully you will find it. Also, make sure your Kibana is configured to point to the Elasticsearch server where your index resides; this is configured in config/kibana.yml.
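For example, a minimal sketch of that setting (this assumes an older Kibana that uses elasticsearch.url; newer versions use elasticsearch.hosts, and the host below is a placeholder):

# config/kibana.yml
elasticsearch.url: "http://<your-elasticsearch-host>:9200"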
Hope I have managed to help!
I'm trying to set up an ELK stack on an EC2 Ubuntu 14.04 instance. Everything installs, and everything is working just fine, except for one thing.
Logstash is not creating an index in Elasticsearch. Whenever I try to access Kibana, it wants me to choose an index from Elasticsearch.
Logstash is in the ES node, but the index is missing. Here's the message I get:
"Unable to fetch mapping. Do you have indices matching the pattern?"
Am I missing something? I followed this tutorial: Digital Ocean
EDIT:
Here's the screenshot of the error I'm facing:
Yet another screenshot:
I got identical results on Amazon AMI (Centos/RHEL clone)
In fact exactly as per the above… until I injected some data into Elasticsearch; this creates the first day's index, and then Kibana starts working. My simple .conf is:
input {
  stdin {
    type => "syslog"
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "localhost"
    port => 9200
    protocol => http
  }
}
then
cat /var/log/messages | logstash -f your.conf
Why stdin, you ask? Well, it's not made clear anywhere (as a new Logstash user I also found this very unclear) that Logstash never terminates when using, for example, the file plugin; it's designed to keep watching.
But using stdin, Logstash will run, send the data to Elasticsearch (which creates the index), and then exit.
If I did the same thing with the file input plugin, it would never create the index; I don't know why this is.
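One possible explanation, offered here as an assumption rather than something this answer confirms: the file input tails files from the end by default, so it only emits lines appended after Logstash starts. A sketch that forces it to read the file from the beginning (sincedb_path => "/dev/null" is for testing only, so the read position is not remembered between runs):

input {
  file {
    path => "/var/log/messages"
    type => "syslog"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}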
I finally managed to identify the issue. For some reason, port 5000 is being used by another service, which prevents Logstash from accepting any incoming connections. So all you have to do is edit the logstash.conf file and change the port from 5000 to 5001, or anything convenient for you.
Make sure all of your logstash-forwarders are sending the logs to the new port, and you should be good to go. If you have generated the logstash-forwarder.crt using the FQDN method, then the logstash-forwarder should be pointing to the same FQDN and not an IP.
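For illustration, the input section of logstash.conf might look like this after the change (a sketch; the lumberjack input and certificate paths follow the Digital Ocean tutorial and are assumptions here):

input {
  lumberjack {
    port => 5001
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    type => "syslog"
  }
}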
Is this Kibana3 or 4?
If it's Kibana 4, click on Settings in the top menu, choose Indices, and make sure that the index name contains 'logstash-*'; then click in the 'Time-field name' box and choose '@timestamp'.
I've added a screenshot of my settings below, be careful which options you tick.