Graylog v4.0.7
Elasticsearch v7.7
MongoDB v4.4
We are setting up Graylog in our Kubernetes cluster. Graylog can connect to the MongoDB server in another cluster through a load balancer. But when we connect Graylog to Elasticsearch, which is also in a different cluster (through a load balancer, e.g. https://myelastic.sample.com:443), the app logs show a 503. Yet when we curl some Elasticsearch APIs, we get a 200.
This only occurs inside the Graylog pod.
Caused by: org.graylog.shaded.elasticsearch7.org.elasticsearch.client.ResponseException: method [GET], host [https://myelastic.sample.com:443], URI [/_alias/graylog_deflector?ignore_throttled=false&ignore_unavailable=false&expand_wildcards=open%2Cclosed&allow_no_indices=true], status line [HTTP/1.1 503 Service Unavailable]
upstream connect error or disconnect/reset before headers. reset reason: connection failure
at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestClient.convertResponse(RestClient.java:302) ~[?:?]
at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestClient.performRequest(RestClient.java:272) ~[?:?]
at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestClient.performRequest(RestClient.java:246) ~[?:?]
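For reference, a sketch of the curl check mentioned above, run from inside the Graylog pod so the request takes the same network path as Graylog itself (the pod name is a placeholder):

# Run the same request Graylog makes, from inside the Graylog pod.
# <graylog-pod> is a placeholder for the actual pod name.
kubectl exec -it <graylog-pod> -- \
  curl -sv 'https://myelastic.sample.com:443/_alias/graylog_deflector'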
Related
I deployed a CockroachDB cluster on 4 GCP instances in secure mode and configured a TCP proxy load balancer to distribute the traffic. But when I try to connect through the load balancer, I sometimes get connected, but most of the time I get a connection timeout, with this error message in the instances' CockroachDB logs:
http: TLS handshake error from 130.211.1.145:50475: tls: first record does not look like a TLS handshake
The 130.211.1.145 address in the error message is the GCP load balancer's IP address.
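For reference, this is roughly how I attempt the secure-mode connection through the load balancer (the certs directory and the LB address are placeholders):

# Connect to the cluster through the TCP proxy load balancer in secure mode.
# --certs-dir points at the client certs generated for the secure cluster.
cockroach sql \
  --certs-dir=certs \
  --host=<load-balancer-address> \
  --port=26257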
Any thoughts?
I am trying to refresh an index pattern from Kibana and getting a 403 error. I have no heap or RAM issues.
{"statusCode":403,"error":"Forbidden","message":"index [.kibana_1] blocked by: [FORBIDDEN/8/index write (api)];: [cluster_block_exception] index [.kibana_1] blocked by: [FORBIDDEN/8/index write (api)];"}
I am trying to push data from Logstash to AWS ES and getting the error below. Could anyone please help me fix this?
[2019-01-04T08:52:50,757][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://search-mb-production-app-.us-west-2.es.amazonaws.com:80/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>64}
[2019-01-04T08:52:53,345][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://search-mb-production-app-**.us-west-2.es.amazonaws.com:80/"}
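A minimal sketch of one mitigation, assuming the stock logstash-output-elasticsearch plugin: raise the request timeout so slow bulk responses are not treated as unreachable (the domain is redacted above, so a placeholder is used):

# Write an output config with a longer request timeout (the default is 60s).
cat > /etc/logstash/conf.d/es-output.conf <<'EOF'
output {
  elasticsearch {
    hosts   => ["http://<aws-es-domain>.us-west-2.es.amazonaws.com:80"]
    timeout => 120
  }
}
EOF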
I have a 3-node NiFi cluster running (version 1.5).
I have built the pipeline with HandleHttpRequest, an intermediate set of processors for transforming/cleaning the incoming data, followed by HandleHttpResponse.
I have provided the HTTP endpoint to client applications.
There are various client applications which post data through the REST endpoint provided by NiFi.
I had an issue with HandleHttpRequest running with execution mode set to "all nodes". The NiFi UI becomes unresponsive after some time, and I can see the error message below in nifi-app.log as well as an error message on the HandleHttpRequest processor.
Changing the execution mode to "primary node" solves the issue.
So I need help with the following:
Can HandleHttpRequest not work in a multi-node cluster environment?
If so, how do I make use of multiple nodes to handle the high throughput of incoming data posted by client applications?
How do I put a load balancer in front of the HandleHttpRequest endpoint and distribute the incoming data across multiple nodes of the cluster?
Error message at the processor level:
HandleHttpRequest[id=74ee1128-2de6-3979-a818-2b598186f7aa] HandleHttpRequest[id=74ee1128-2de6-3979-a818-2b598186f7aa] failed to process due to org.apache.nifi.processor.exception.ProcessException: Failed to initialize the server; rolling back session: Failed to initialize the server
==========================
nifi-app.log
2018-08-03 14:45:22,551 INFO [Timer-Driven Process Thread-6] org.eclipse.jetty.server.Server jetty-9.4.3.v20170317
2018-08-03 14:45:22,551 ERROR [Timer-Driven Process Thread-3] o.a.n.p.standard.HandleHttpRequest HandleHttpRequest[id=74ee1128-2de6-3979-a818-2b598186f7aa] Failed to process session due to org.apache.nifi.processor.exception.ProcessException: Failed to initialize the server: {}
org.apache.nifi.processor.exception.ProcessException: Failed to initialize the server
at org.apache.nifi.processors.standard.HandleHttpRequest.onTrigger(HandleHttpRequest.java:488)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
==============================================
Any help in providing direction is appreciated.
Thanks,
Vish
This error generally means there is an issue with the server binding to the host or port. Since it sounds like the port is available, it is probably an issue with the hostname.
In a cluster, you would have to leave the hostname property blank in HandleHttpRequest so it binds to the address of each node, or you could use a dynamic expression like ${hostname}.
For load balancing, it should be the same as putting a load balancer in front of any other web application. There are some articles that cover it already:
https://pierrevillard.com/2017/02/10/haproxy-load-balancing-in-front-of-apache-nifi/
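Along the lines of that article, a minimal HAProxy sketch, written as a shell heredoc so it can be validated in place (the node hostnames and the HandleHttpRequest listening port are assumptions):

# Minimal HAProxy config balancing across the HandleHttpRequest port
# (assumed here to be 8011) on each NiFi node.
cat > /etc/haproxy/haproxy.cfg <<'EOF'
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend nifi_http_in
    bind *:8080
    default_backend nifi_nodes

backend nifi_nodes
    balance roundrobin
    server nifi1 nifi-node-1:8011 check
    server nifi2 nifi-node-2:8011 check
    server nifi3 nifi-node-3:8011 check
EOF
haproxy -c -f /etc/haproxy/haproxy.cfg   # validate the config before reloading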
I am running a Kubernetes cluster, and I found that when I try to access the Kibana dashboard I get a status "RED" screen and can see that the Elasticsearch service is unavailable. I checked the logs on one of the Elasticsearch pods and saw the following:
[2017-09-19 08:54:33,776][WARN ][transport.netty ] [Dominus] exception caught on transport layer [[id: 0xf27712f3]], closing connection
java.net.NoRouteToHostException: No route to host
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2017-09-19 08:54:44,832][WARN ][rest.suppressed ] path: /_bulk, params: {}
ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/2/no master];]
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:158)
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(ClusterBlocks.java:144)
at org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(TransportBulkAction.java:204)
at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:151)
at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:71)
at org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:149)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:137)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:85)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
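For reference, a quick way to confirm the no-master state reported above (the pod name is a placeholder):

# Ask any Elasticsearch pod for cluster health; with no master elected,
# this typically fails with a 503 instead of reporting a status.
kubectl exec -it <elasticsearch-pod> -- \
  curl -s 'http://localhost:9200/_cluster/health?pretty'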
Okay, I would like to post an answer to my own question. I was getting a queue buffer error on an Elasticsearch node; I assume it exceeded the heap memory and hence started failing. I restarted Kibana and the Elasticsearch master, and it worked like a charm. :)
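For anyone else hitting this, the restart itself was just deleting the pods and letting their controllers recreate them (the pod names and namespace are placeholders):

# Delete the Kibana and Elasticsearch master pods; their controllers recreate them.
kubectl delete pod <kibana-pod> <elasticsearch-master-pod> -n <namespace>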