I'm having trouble connecting to an Elasticsearch instance with a Telegraf output plugin.
I created an Elasticsearch deployment via the Elasticsearch service, and created a user and password (attached to a role) for it in Kibana.
Then I set up a Telegraf output for it:
[[outputs.elasticsearch]]
  urls = [ "https://hostname:port" ] # required.
  timeout = "5s"
  enable_sniffer = false
  health_check_interval = "10s"
  ## HTTP basic authentication details.
  username = "my_username"
  password = "my_password"
  index_name = "device_logs" # required.
  insecure_skip_verify = true
  manage_template = true
  template_name = "telegraf"
  overwrite_template = false
But when I start Telegraf with this config, it just gives the error:
[agent] Failed to connect to [outputs.elasticsearch], retrying in 15s, error was 'health check timeout: no Elasticsearch node available'
The connection failure seems to originate deep in the bowels of Go's net/http library, and I don't know how to get more useful output at this point.
Things I've tried:
Thing #1: I tested the endpoint with cURL:
curl -u my_username:my_password -X POST "https://hostname:port/device_logs/_doc" -H 'Content-Type: application/json' -d'
{
"name": "John Doe"
}'
This works fine.
Thing #2: I wrote a simple Go program to connect to Elasticsearch:
package main

import (
    "log"

    "gopkg.in/olivere/elastic.v3"
)

func main() {
    // Configure the connection to Elasticsearch.
    client, err := elastic.NewClient(elastic.SetURL("https://hostname:port"))
    if err != nil {
        panic(err)
    }
    log.Printf("client.running? %v", client.IsRunning())
    if !client.IsRunning() {
        panic("Could not make connection, not running")
    }
}
...and it hits the first panic with the same "no Elasticsearch node available" error.
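For what it's worth, olivere/elastic also has options for basic auth and for disabling sniffing; a variant closer to the cURL test (a sketch, I have not verified it changes the outcome here) would be:

package main

import (
    "log"

    "gopkg.in/olivere/elastic.v3"
)

func main() {
    // Sketch: pass the same credentials as the cURL test and disable
    // sniffing, which tends to fail against proxied/managed clusters.
    client, err := elastic.NewClient(
        elastic.SetURL("https://hostname:port"),
        elastic.SetBasicAuth("my_username", "my_password"),
        elastic.SetSniff(false),
    )
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("client.running? %v", client.IsRunning())
}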
Thing #3: I tried running gdb on that Go program to debug into it.
It jumps down to assembly as soon as I call NewClient, so I can't really learn what is happening in the bowels of net/http.
I've never used Go before, so I'm hoping to avoid hours of learning Go, spelunking, and debugging to get around what hopefully is a simple issue here.
Any ideas on how to get more info here or why this is failing? Are there build or runtime flags for Go that I can use? gdb-with-Go debugging tips so I can step down into the Go library code? Elasticsearch client know-how?
To answer my own question, the problem here turned out to be the role's permissions. The Telegraf output plugin for Elasticsearch needs both the monitor and the manage_index_templates cluster permissions enabled, or else it fails to connect to the Elasticsearch server without printing any information about why.
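For reference, a role with those privileges can be created through the security API on recent Elasticsearch versions (older releases use the /_xpack/security/role path instead); the role name and index privileges below are illustrative:

curl -u admin_user:admin_password -X PUT "https://hostname:port/_security/role/telegraf_writer" -H 'Content-Type: application/json' -d'
{
  "cluster": ["monitor", "manage_index_templates"],
  "indices": [
    { "names": ["device_logs*"], "privileges": ["create_index", "write"] }
  ]
}'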
BTW: to build Go code so that you can debug into the libraries it calls (disabling optimizations and inlining):
go build -gcflags=all="-N -l"
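(Side note: gdb's Go support is limited; the Delve debugger understands Go's runtime natively. Assuming dlv is installed, a session looks roughly like:)

dlv debug .
(dlv) break main.main
(dlv) continue
(dlv) step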
I am trying to create Golang web-pages...
Progress:
Ubuntu 18.04 installed both locally and on a Linode VPS.
Created and compiled a local Golang "Hello World" script that renders OK both locally and online.
Created a net/http Golang script that works OK when called locally (http://localhost:8080/testing).
Uploaded the script to the Linode server; the initial status messages appear, but when calling http://123.456.789.32:8080/testing, the browser freezes.
//
// Golang - main.go
//
package main

import (
    "net/http"
)

func sayHello(w http.ResponseWriter, r *http.Request) {
    message := "Hello " + r.URL.Path
    w.Write([]byte(message))
}

func main() {
    http.HandleFunc("/", sayHello)
    if err := http.ListenAndServe(":8080", nil); err != nil {
        panic(err)
    }
}
No errors or warnings are rendered, and I am unable to find any log references.
Can errors and warnings similar to PHP's error_reporting(-1), declare(strict_types=1), etc. be logged or rendered?
A quick check with Nmap showed this result:
nmap -sV -p 8080 <yourIP>
Starting Nmap 7.70 ( https://nmap.org ) at 2019-07-04 07:45 CEST
Nmap scan report for <your-domain>.com (<yourIP>)
Host is up (0.032s latency).
PORT STATE SERVICE VERSION
8080/tcp filtered http-proxy
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 0.90 seconds
The state of "filtered" actually means that there was no response on that port as opposed to an outright rejection of the request.
Check the output of iptables -L -n. Presumably, you have a firewall running and blocking port 8080. Do not simply deactivate the firewall, but read up on how to open port 8080 in the firewall product you are using. Linode has guides for the commonly used/preinstalled firewalls of various Linux distributions.
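For example, on Ubuntu 18.04 the firewall is often ufw (this assumes ufw is what's running; adapt for firewalld or raw iptables):

sudo ufw status
sudo ufw allow 8080/tcp

# or, with raw iptables:
sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT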
If you plan to go into production, please have someone help you to ensure security and availability of your deployment.
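Regarding the error_reporting / strict_types question: Go has no global error-reporting switch. Errors are ordinary values you handle and log explicitly, and an unrecovered panic prints a stack trace to stderr. A minimal sketch of explicit logging around the server above (logRequests is a name of my own choosing):

package main

import (
    "log"
    "net/http"
)

func sayHello(w http.ResponseWriter, r *http.Request) {
    message := "Hello " + r.URL.Path
    if _, err := w.Write([]byte(message)); err != nil {
        log.Printf("write failed: %v", err) // errors are values; log them yourself
    }
}

// logRequests wraps a handler and logs every incoming request.
func logRequests(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        log.Printf("%s %s from %s", r.Method, r.URL.Path, r.RemoteAddr)
        next.ServeHTTP(w, r)
    })
}

func main() {
    http.HandleFunc("/", sayHello)
    log.Println("listening on :8080")
    // log.Fatal makes a failed bind visible instead of a silent exit.
    log.Fatal(http.ListenAndServe(":8080", logRequests(http.DefaultServeMux)))
}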
I have a Go program that uses aws-sdk-go to talk to DynamoDB. Dependencies are vendored. Go version 1.7.1. aws-sdk-go version 1.6.24. The program works as expected in all of the following environments:
dev box from shell (Arch Linux)
docker container running on my dev box (Docker 1.13.1)
EC2 instance from shell (Ubuntu 16.04)
When I run the Docker container on Kubernetes (the same one I tested on my dev box), I get the following error:
2017/03/02 22:30:13 DEBUG ERROR: Request dynamodb/GetItem:
---[ REQUEST DUMP ERROR ]-----------------------------
net/http: invalid header field value "AWS4-HMAC-SHA256 Credential=hidden\n/20170302/us-east-1/dynamodb/aws4_request, SignedHeaders=accept-encoding;content-length;content-type;host;x-amz-date;x-amz-target, Signature=483f56dd0b17d8945d3c2f2044b7f97e531190602f132a4d5f828264b3a2cff2" for key Authorization
-----------------------------------------------------
2017/03/02 22:30:13 DEBUG: Response dynamodb/GetItem Details:
---[ RESPONSE ]--------------------------------------
HTTP/0.0 000 status code 0
Content-Length: 0
Based on:
https://golang.org/src/net/http/transport.go
https://godoc.org/golang.org/x/net/lex/httplex#ValidHeaderFieldValue
It looks like the problem is with the header value validation, yet I am at a loss to understand why it works everywhere except on my k8s cluster. The cluster is composed of EC2 instances running the latest CoreOS stable AMI (CoreOS stable 1235.8.0).
The Docker image that works on my dev machine is scratch-based. To troubleshoot, I created an image based on ubuntu:latest with a separate Go program that just does a simple GetItem from DynamoDB. When this image is run on my k8s cluster and the program is run from an interactive shell, I get the same errors. I have confirmed I can ping the DynamoDB endpoints from this environment.
I am having a hard time troubleshooting this issue: am I missing something stupid here? Can someone point me in the right direction or have an idea of what is going on?
remember the "-n" when you do this:
echo -n key | base64
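You can see the difference directly:

echo key | base64      # a2V5Cg==  ("key\n" encoded)
echo -n key | base64   # a2V5      ("key" encoded)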
The \n after hidden is certainly invalid. Not sure if it is actually there or somehow got inserted when you were cleansing for posting.
Consider:
package main

import (
    "fmt"

    "golang.org/x/net/lex/httplex"
)

func main() {
    fmt.Println("Is valid (without new line)", httplex.ValidHeaderFieldValue("AWS4-HMAC-SHA256 Credential=hidden/20170302/us-east-1/dynamodb/aws4_request, SignedHeaders=accept-encoding;content-length;content-type;host;x-amz-date;x-amz-target, Signature=483f56dd0b17d8945d3c2f2044b7f97e531190602f132a4d5f828264b3a2cff2"))
    fmt.Println("Is valid (with new line)", httplex.ValidHeaderFieldValue("AWS4-HMAC-SHA256 Credential=hidden\n/20170302/us-east-1/dynamodb/aws4_request, SignedHeaders=accept-encoding;content-length;content-type;host;x-amz-date;x-amz-target, Signature=483f56dd0b17d8945d3c2f2044b7f97e531190602f132a4d5f828264b3a2cff2"))
}
One guess would be that wherever the real hidden value is getting pulled from (a config file, etc.) mistakenly has the \n in there, and it's happily getting pulled into your header, but only in this case.
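One defensive fix, assuming the credentials are read from files or env vars that may carry a stray newline (the file paths here are hypothetical), is to trim them before handing them to the SDK:

package main

import (
    "io/ioutil"
    "log"
    "strings"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
)

func main() {
    // Hypothetical: credentials mounted as files, e.g. from a Kubernetes Secret.
    id, err := ioutil.ReadFile("/etc/aws/access_key_id")
    if err != nil {
        log.Fatal(err)
    }
    secret, err := ioutil.ReadFile("/etc/aws/secret_access_key")
    if err != nil {
        log.Fatal(err)
    }
    // TrimSpace drops any trailing newline that would otherwise end up in the
    // Authorization header and trip net/http's header validation.
    creds := credentials.NewStaticCredentials(
        strings.TrimSpace(string(id)),
        strings.TrimSpace(string(secret)),
        "",
    )
    sess, err := session.NewSession(&aws.Config{
        Region:      aws.String("us-east-1"),
        Credentials: creds,
    })
    if err != nil {
        log.Fatal(err)
    }
    _ = sess // pass sess to dynamodb.New(sess) as usual
}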
I have an Elasticsearch Docker image listening on 127.0.0.1:9200. I tested it using Sense and Kibana, and it works fine; I am able to index and query documents. But when I try to write to it from a Spark app:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.elasticsearch.spark._

val sparkConf = new SparkConf().setAppName("ES").setMaster("local")
sparkConf.set("es.index.auto.create", "true")
sparkConf.set("es.nodes", "127.0.0.1")
sparkConf.set("es.port", "9200")
sparkConf.set("es.resource", "spark/docs")

val sc = new SparkContext(sparkConf)
val sqlContext = new SQLContext(sc)

val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")
val rdd = sc.parallelize(Seq(numbers, airports))
rdd.saveToEs("spark/docs")
It fails to connect and keeps on retrying:
16/07/11 17:20:07 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Operation timed out
16/07/11 17:20:07 INFO HttpMethodDirector: Retrying request
I tried using the IP address given by docker inspect for the Elasticsearch container, but that does not work either. However, when I use a native installation of Elasticsearch, the Spark app runs fine. Any ideas?
Also, set the config es.nodes.wan.only to true, as mentioned in this answer, if you are having issues writing to ES.
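With the SparkConf above, that is:

sparkConf.set("es.nodes.wan.only", "true")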
A couple of things I would check:
The version of the Elasticsearch-Hadoop Spark connector you are working with. Make sure it is not a beta; there was a bug related to IP resolution that has since been fixed.
Since 9200 is the default port, you can remove the line sparkConf.set("es.port", "9200") and check.
Check that there is no proxy configured in your Spark environment or config files.
I assume you run Elasticsearch and Spark on the same machine. Can you try configuring your machine's IP address instead of 127.0.0.1?
Hope this helps! :)
I had the same problem, and a further issue was that confs set using sparkConf.set() didn't have any effect. But supplying the confs to the saving function worked, like this:
rdd.saveToEs("spark/docs", Map("es.nodes" -> "127.0.0.1", "es.nodes.wan.only" -> "true"))
I have recently installed the ELK stack on a Windows server (following this: https://community.ulyaoth.net/threads/how-to-install-logstash-on-a-windows-server-with-kibana-in-iis.17/)
I can get the IIS logs from the server into Logstash and into Elasticsearch, but I can't get the same logs from another server.
Here is my Logstash config file from my second server:
input {
    file {
        type => "IISLog"
        path => "C:/inetpub/logs/LogFiles/W3SVC*/*.log"
    }
}

filter {
    mutate {
        add_field => [ "hostip", "%{host}" ]
    }
    dns {
        reverse => [ "host" ]
        action => replace
    }
}

output {
    elasticsearch {
        host => "ELK01v"
        port => "9301"
    }
}
but nothing is showing in Kibana.
In the stderr.log for Logstash I can see the following:
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:745)
and this from the stdout.log:
{:timestamp=>"2014-08-22T15:04:55.775000+0100", :message=>"Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones", :level=>:warn}
{:timestamp=>"2014-08-22T15:04:55.853000+0100", :message=>"Using milestone 2 filter plugin 'dns'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones", :level=>:warn}
log4j, [2014-08-22T15:05:34.215] WARN: org.elasticsearch.discovery: [logstash-WEB01v-3460-4038] waited for 30s and no initial state was set by the discovery
log4j, [2014-08-22T15:09:06.334] WARN: org.elasticsearch.transport: [logstash-WEB01v-3460-4038] Transport response handler not found of id [240]
I've confirmed that I can telnet to ELK01v on port 9301, but I can't think what else could be causing these errors. Is there anyone with ELK knowledge that could help at all?
Thanks
This is an indication that Logstash is trying to join your cluster as a node but wasn't able to for some reason (for example, a firewall: there is communication in both directions when it joins the cluster). The easiest solution is to add protocol => http to your elasticsearch output. This will work since you've already verified the firewall is open in that direction.
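Something like this (a sketch; note that the http protocol talks to Elasticsearch's HTTP port, 9200 by default, rather than the transport port 9301 you are pointing at now):

output {
    elasticsearch {
        host => "ELK01v"
        port => "9200"
        protocol => "http"
    }
}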
I am following this tutorial to learn Elasticsearch, running it on EC2:
http://exploringelasticsearch.com/book/an-overview/up-and-running.html
So I am able to run the server and, using the query tool provided (http://elastichammer.exploringelasticsearch.com/), I can get the status response. However, when I try creating an index with a PUT, I get back the error:
{ "error": "IndexCreationException[[planet] failed to create index];
nested: NoClassDefFoundError[Could not initialize class
org.elasticsearch.index.codec.postingsformat.PostingFormats]; ",
"status": 500 }
The same thing happens when I use cURL... I can get the sample response with:
curl -XGET http://ec2-54-218-36-27.us-west-2.compute.amazonaws.com:9200/
as well as the status with:
curl -XGET http://ec2-54-218-36-27.us-west-2.compute.amazonaws.com:9200/_status
Has anybody experienced this?
Thank you!