Logstash input using proxy

I'm having a little trouble using Logstash with http_poller as an input plugin. I would like to send my HTTP requests through a proxy, but the only documentation I could find says:
Value type is string
There is no default value for this setting.
How can I define the IP of my proxy and a specific port to use?

After a few weeks of research, I found the solution:
You just have to specify the host and port inside a proxy block:
proxy => {
  host => "IP"
  port => "PORT"
}
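For context, here is a sketch of how that proxy block fits into a complete http_poller input. The URL, schedule, and proxy host/port are placeholders, not values from the original question:

```
input {
  http_poller {
    # Hypothetical endpoint to poll
    urls => {
      myservice => "http://example.com/status"
    }
    schedule => { cron => "* * * * * UTC" }
    # Route every poll through the HTTP proxy
    proxy => {
      host => "10.0.0.250"   # your proxy's IP or hostname
      port => 8080           # your proxy's port
    }
  }
}
```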

Related

How to Use Domain name for Elasticsearch Connection in Python

We have deployed Elasticsearch (8.3) on Kubernetes, with an ingress defined for Elasticsearch as https://elasticsearch.url.com/es. But when I use that URL to connect with the Python elasticsearch package, I get the error below:
ValueError: URL must include a 'scheme', 'host', and 'port' component (ie 'https://localhost:9200')
Note: I have tried adding the port number (https://elasticsearch.url.com:9200/es/) but it still did not work.
I am using below code to connect:
from elasticsearch import Elasticsearch

client = Elasticsearch(
    ["https://elasticsearch.url.com/es/"],
    http_auth=('username', 'password')
)
Kindly help me resolve this.
The clients expect something like https://elasticsearch.url.com:9200/, as anything after the last / is considered a path/action of some sort (e.g. _search or an index name) for Elasticsearch to then act on based on that context.
You will likely need to remove the trailing es part of the URL; then you can use https://elasticsearch.url.com:80/ (assuming ingress port 80 redirects to port 9200 for Elasticsearch).
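The client's complaint can be reproduced with a quick standard-library check. This is a minimal sketch; the URLs come from the question, and the has_scheme_host_port helper is hypothetical, not part of the elasticsearch package:

```python
from urllib.parse import urlsplit

def has_scheme_host_port(url):
    """Check for the three URL components the ES client error asks for."""
    parts = urlsplit(url)
    return bool(parts.scheme) and bool(parts.hostname) and parts.port is not None

# No explicit port, so the client raises ValueError:
print(has_scheme_host_port("https://elasticsearch.url.com/es/"))  # False
# Explicit port and no trailing path, as the answer suggests:
print(has_scheme_host_port("https://elasticsearch.url.com:80/"))  # True
```

With the cleaned-up URL, the connection call would then be `Elasticsearch("https://elasticsearch.url.com:80/", ...)`, assuming the ingress forwards port 80 to 9200 as described in the answer.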

Is there a way to enable both unix socket and http in consul?

I am currently running the Consul agent as a service in a VM, and it works well with either an http address or a unix:/// address, but I have a requirement where both http and a Unix socket have to be enabled. Is that possible? Please let me know your thoughts. Thanks!
The addresses key supports specifying a space-separated list of addresses to bind to. You can use the following configuration to have Consul listen on an IP address as well as a Unix socket.
# config.hcl
addresses {
  http = "0.0.0.0 unix:///tmp/consul-stackoverflow-example.socket"
}
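Assuming Consul is bound to the Unix socket from the config above, you can talk to its HTTP API over that socket with nothing but the standard library. A rough sketch; the socket path and API path are illustrative:

```python
import socket

def build_http_get(path, host="localhost"):
    """Build a minimal HTTP/1.1 GET request by hand."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n\r\n"
    ).encode()

def consul_get(path, socket_path="/tmp/consul-stackoverflow-example.socket"):
    """Send a GET to Consul's API over its Unix socket, return the raw response."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(socket_path)
        s.sendall(build_http_get(path))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# e.g. consul_get("/v1/agent/self") -- while the plain HTTP listener keeps working too
```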

How to get proxy server ip

Let's say you have to set up proxy settings in some app, but you don't know the proxy server's IP and/or port. The browser setting just says: automatic detection.
And there is no one around to give you the answer.
How do you obtain the proxy server's IP address?
Open cmd or PowerShell and run netstat.
You will see a lot (and a lot more) of connections. The output has columns like:
'Protocol' 'Local Address' 'Foreign Address' 'State'
One Foreign Address will repeat the same value many, many times; in 99% of cases that is your proxy server. If only a name is shown, simply ping the name to get the IP address.
For example:
proxy:8080
ping proxy
proxy.mynetwork.local
10.0.0.250
Then set up the proxy in your app:
proxy server: proxy.mynetwork.local (or 10.0.0.250)
proxy port: 8080
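The "most repeated foreign address" heuristic above is easy to script. A sketch that parses netstat-style rows; the sample addresses are made up, and on a real machine you would feed it the lines from netstat -n:

```python
from collections import Counter

def count_foreign_addresses(netstat_lines):
    """Count the Foreign Address column of netstat output; the most
    frequent entry is very likely the proxy."""
    counts = Counter()
    for line in netstat_lines:
        parts = line.split()
        # netstat rows look like: TCP  <local>  <foreign>  <state>
        if len(parts) >= 4 and parts[0].upper() == "TCP":
            counts[parts[2]] += 1
    return counts

sample = [
    "TCP  10.0.0.17:52113  10.0.0.250:8080  ESTABLISHED",
    "TCP  10.0.0.17:52114  10.0.0.250:8080  ESTABLISHED",
    "TCP  10.0.0.17:52115  93.184.216.34:443  TIME_WAIT",
]
print(count_foreign_addresses(sample).most_common(1))
# [('10.0.0.250:8080', 2)]
```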
try this list of servers here:
https://www.us-proxy.org/
or here:
https://whatismyipaddress.com/google-search?q=proxy+server+list&sa=Proxy+Server+List+Search&cx=013731333855297778374%3Absyy_h6slhu&cof=FORID%3A9&ie=ISO-8859-1

Filebeat - Multiple server instance configuration

I have configured the Elastic Stack (Logstash + Elasticsearch + Kibana) with Filebeat. My question: I have multiple servers where my application instances (microservices) are deployed. I want to capture logs from all the servers, but for that I would have to install Filebeat on each server. Is that the correct understanding? Or can a single Filebeat instance be configured to fetch logs from all the servers (the servers can be on the same network) and send the logs over TCP or some other protocol?
Yes, you will have to deploy Filebeat on every server from which you wish to ship logs.
Another option is to configure Logstash to listen on a TCP port and then configure your applications to log to a socket instead of a file.
input {
  tcp {
    port => 8192
    codec => json
    tags => [ "micrologs" ]
  }
}
This sets up a listener on the Logstash box on port 8192. Logs arrive one at a time, with a connection each time, formatted in JSON.
input {
  tcp {
    port => 8192
    codec => json_lines
    tags => [ "micrologs" ]
  }
}
This does the same, except the connection is persistent, and the json_lines codec is used to break up log events based on the lines of JSON in the incoming connection.
You don't have to use JSON here; it can be plain text if you need it. I used JSON as an example of structured logging.
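On the application side, logging to the json_lines listener above amounts to writing one JSON object per line to a TCP socket. A minimal sketch; the host, port, and field names are illustrative:

```python
import json
import socket
from datetime import datetime, timezone

def encode_event(message, **fields):
    """Serialize one log event as a single JSON line, the framing
    the json_lines codec expects."""
    event = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "message": message,
        **fields,
    }
    return (json.dumps(event) + "\n").encode()

def send_event(host, port, message, **fields):
    """Open a connection to the Logstash tcp input and ship one event."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(encode_event(message, **fields))

# e.g. send_event("logstash.internal", 8192, "order created", service="orders")
```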

Logstash unable to send output to elastic search

This question has already been asked at https://stackoverflow.com/questions/30283837/logstash-unable-to-send-output-to-elasticsearch, but I can't see an actual answer there. Can someone advise whether there is a solution, as I am suffering from the same problem?
I believe a system parameter is causing the problem, as I've run the same config on a different machine and it works fine. Does anyone know what it might be?
I've looked everywhere for an answer but can only find mentions of a firewall issue (so I tried outside the firewall) and of the timestamp differing between the Twitter server and my machine (my machine's time is set correctly). Can anyone advise what's causing the authorisation error below? (I've also checked the Twitter app, and its settings are correct and working.)
Logstash startup completed
{:exception=>Twitter::Error::Unauthorized, :backtrace=>[
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/twitter-5.12.0/lib/twitter/streaming/response.rb:21:in `on_headers_complete'",
  "org/ruby_http_parser/RubyHttpParser.java:370:in `<<'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/twitter-5.12.0/lib/twitter/streaming/response.rb:16:in `<<'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/twitter-5.12.0/lib/twitter/streaming/connection.rb:22:in `stream'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/twitter-5.12.0/lib/twitter/streaming/client.rb:116:in `request'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/twitter-5.12.0/lib/twitter/streaming/client.rb:36:in `filter'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/logstash-input-twitter-0.1.6/lib/logstash/inputs/twitter.rb:88:in `run'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.1-java/lib/logstash/pipeline.rb:176:in `inputworker'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.1-java/lib/logstash/pipeline.rb:170:in `start_input'"
], :level=>:warn}
I'm using the below Twitter config:
input {
  twitter {
    consumer_key => "xxxxxxxxxxxxxxxxxx"
    consumer_secret => "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    oauth_token => "xxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    oauth_token_secret => "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    keywords => ["xxxxxxxxxxx"]
    full_tweet => true
  }
}
output {
  stdout { codec => dots }
  elasticsearch {
    protocol => "http"
    host => "localhost"
    index => "twitter"
    document_type => "tweet"
    template => "twitter_template.json"
    template_name => "twitter"
  }
}
I tried the same from outside the firewall, and instead of the authorisation issue I got a connection refused. So I believe, warkolm, that you were correct, but that has just left me with a different problem.
Connection refused - Connection refused {:exception=>#<... Connection refused - Connection refused>, :backtrace=>[
  "org/jruby/ext/socket/RubyTCPSocket.java:126:in `initialize'",
  "org/jruby/RubyIO.java:853:in `new'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/twitter-5.12.0/lib/twitter/streaming/connection.rb:16:in `stream'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/twitter-5.12.0/lib/twitter/streaming/client.rb:116:in `request'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/twitter-5.12.0/lib/twitter/streaming/client.rb:36:in `filter'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/logstash-input-twitter-0.1.6/lib/logstash/inputs/twitter.rb:88:in `run'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.1-java/lib/logstash/pipeline.rb:176:in `inputworker'",
  "C:/logstash-1.5.1/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.1-java/lib/logstash/pipeline.rb:170:in `start_input'"
], :level=>:warn}
This is even after trying a new set of twitter authorization codes.
If this same configuration is run on another developer's machine, it works fine, so there must be a setting on my desktop that isn't correct (both machines run the same setup: Windows 8.1 with ES 1.7.1 and LS 1.5.1).
Any thoughts on what setting is missing on my machine?
Thanks
Leigh
The stack trace that you provided originates in the Twitter connection phase -- it's getting connection refused when trying to initiate a connection to stream.twitter.com:443. Since it's unlikely that the Twitter streaming API was down, you've probably got a firewall blocking you in some way.
Looking up stream.twitter.com, there are multiple A records for it, so it's possible that whoever runs your firewall missed an address or something like that. It could also be that the other developer has a hosts-file entry forcing it to a specific address.
If you look up the hostname with a DNS tool such as nslookup, it will show you which addresses the host resolves to.
You can then try telnetting to port 443 on each address to see whether you get a connection or a connection refused.
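The lookup-then-telnet check can be done in one go with a short script. A sketch assuming plain TCP reachability is all you want to test:

```python
import socket

def check_endpoints(host, port=443, timeout=5):
    """Resolve every IPv4 address for host and attempt a TCP connect
    to each, mirroring nslookup + telnet by hand."""
    results = {}
    for info in socket.getaddrinfo(host, port, socket.AF_INET, socket.SOCK_STREAM):
        addr = info[4]  # (ip, port) tuple
        try:
            with socket.create_connection(addr, timeout=timeout):
                results[addr[0]] = "connected"
        except OSError as exc:
            results[addr[0]] = f"failed: {exc}"
    return results

# e.g. check_endpoints("stream.twitter.com") -- a firewall rule that misses one
# A record will show "connected" for some addresses and failures for others
```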
