Not able to see newly added log in docker ELK - elasticsearch

I'm using sebp/elk's dockerised ELK. I've managed to get it running on my local machine and I'm trying to input dummy log data by SSH'ing into the docker container and running:
/opt/logstash/bin/logstash --path.data /tmp/logstash/data \
-e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'
After typing in some random text, I cannot see it indexed by elasticsearch when I visit http://localhost:9200/_search?pretty&size=1000
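For reference, one way to verify whether the stdin events were actually indexed is to list the indices and then search the default Logstash index pattern (a sketch assuming the container's standard port mapping; the index names shown are the Logstash defaults, not taken from the question):

curl 'http://localhost:9200/_cat/indices?v'                      # look for a logstash-YYYY.MM.DD index
curl 'http://localhost:9200/logstash-*/_search?pretty&size=10'   # search only the Logstash indices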

Related

Trying to set logstash conf file in docker-compose.yml on Mac OS

Here is what I have specified in my yml for the Logstash service. I've tried multiple variations with quotes, without quotes, etc.:
volumes:
  - "./logstash:/etc/logstash/conf:ro"
command:
  - "logstash -f /etc/logstash/conf/simplels.conf"
And simplels.conf contains this:
input {
  stdin {}
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {}
}
The overall file structure is shown below. I'm running docker-compose up from the docker folder and getting Exit 1 on the Logstash container due to my 'command' parameter:
/docker:
  docker-compose.yml
  /logstash
    simplels.conf
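A common cause of that Exit 1 is that a YAML list item containing the whole command line is passed to Docker as a single argument, so it looks for an executable literally named "logstash -f /etc/logstash/conf/simplels.conf". A minimal sketch of the two usual ways to write the command (same paths as above; not a verified fix for this exact setup):

command: logstash -f /etc/logstash/conf/simplels.conf
# or, one list element per argument:
command: ["logstash", "-f", "/etc/logstash/conf/simplels.conf"]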

How to connect Opentracing application to a remote Jaeger collector

I am using the Jaeger UI to display traces from my application. It works fine for me if both the application and Jaeger are running on the same server, but I need to run my Jaeger collector on a different server. I tried it with JAEGER_ENDPOINT, JAEGER_AGENT_HOST and JAEGER_AGENT_PORT, but it failed.
I don't know whether the values I set for these variables are wrong, or whether any configuration is required inside the application code.
Can you point me to any documentation for this problem?
On server 2, install Jaeger:
$ docker run -d --name jaeger \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 9411:9411 \
jaegertracing/all-in-one:latest
On server 1, set these environment variables:
JAEGER_SAMPLER_TYPE=probabilistic
JAEGER_SAMPLER_PARAM=1
JAEGER_SAMPLER_MANAGER_HOST_PORT=(EnterServer2HostName):5778
JAEGER_REPORTER_LOG_SPANS=false
JAEGER_AGENT_HOST=(EnterServer2HostName)
JAEGER_AGENT_PORT=6831
JAEGER_REPORTER_FLUSH_INTERVAL=1000
JAEGER_REPORTER_MAX_QUEUE_SIZE=100
application-server-id=server-x
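For example, if the application on server 1 is launched from a shell, these variables could be exported before starting it; application-server-id is read as a system property or environment variable by the code below (a sketch: the hostname and jar name are placeholders):

export JAEGER_SAMPLER_TYPE=probabilistic
export JAEGER_SAMPLER_PARAM=1
export JAEGER_AGENT_HOST=server2-hostname        # placeholder for server 2's hostname
export JAEGER_AGENT_PORT=6831
export JAEGER_REPORTER_LOG_SPANS=false
java -Dapplication-server-id=server-x -jar your-application.jar   # jar name is a placeholder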
Change the tracer registration code in the application on server 1 as below, so that it picks up the configuration from the environment variables:
@Produces
@Singleton
public static io.opentracing.Tracer jaegerTracer() {
    String serverInstanceId = System.getProperty("application-server-id");
    if (serverInstanceId == null) {
        serverInstanceId = System.getenv("application-server-id");
    }
    return new Configuration("ApplicationName" + (serverInstanceId != null && !serverInstanceId.isEmpty() ? "-" + serverInstanceId : ""),
            Configuration.SamplerConfiguration.fromEnv(),
            Configuration.ReporterConfiguration.fromEnv())
            .getTracer();
}
Hope this works!
Check this link for integrating Elasticsearch as the persistent storage backend, so that the traces are not removed once the Jaeger instance is stopped:
How to configure Jaeger with elasticsearch?
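As a rough sketch (not taken from the linked question), the all-in-one image can be pointed at an existing Elasticsearch cluster via environment variables so spans survive restarts; the Elasticsearch URL below is a placeholder:

docker run -d --name jaeger \
  -e SPAN_STORAGE_TYPE=elasticsearch \
  -e ES_SERVER_URLS=http://your-elasticsearch-host:9200 \
  -p 16686:16686 \
  -p 14268:14268 \
  -p 6831:6831/udp \
  jaegertracing/all-in-one:latest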
Specify "JAEGER_AGENT_HOST" and ensure "local_agent" is not specified in tracer config file.
Below is the working solution for Python
import os
os.environ['JAEGER_AGENT_HOST'] = "123.XXX.YYY.ZZZ"  # Specify the remote Jaeger agent here
# os.environ['JAEGER_AGENT_PORT'] = "6831"           # optional, default: "6831"

from jaeger_client import Config

config = Config(
    config={
        'sampler': {
            'type': 'const',
            'param': 1,
        },
        # ENSURE 'local_agent' is not specified
        # 'local_agent': {
        #     'reporting_host': "127.0.0.1",
        #     'reporting_port': 6831,
        # },
        'logging': True,
    },
    service_name="your-service-name-here",
)

# create the tracer object and voila!
tracer = config.initialize_tracer()
Jaeger getting-started guide: https://www.jaegertracing.io/docs/1.33/getting-started/
Jaeger-Client features: https://www.jaegertracing.io/docs/1.33/client-features/
Flask-OpenTracing: https://github.com/opentracing-contrib/python-flask
OpenTelemetry-Python: https://opentelemetry.io/docs/instrumentation/python/getting-started/

logstash configuration to execute a command to elastic search

I am running an ELK stack in 3 Docker containers on an Ubuntu 16.04 host machine.
The problem is that after configuring the logstash.conf file to execute a command like "ifconfig" or "netstat -ano", I get an error. My logstash.conf file is:
input {
  exec {
    command => "netsat -ano"
    codec => "json"
    interval => 5
  }
}
output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
I get this error after running this command (docker run -h logstash --name logstash --link elasticsearch:elasticsearch -it --rm -v "$PWD":/config-dir logstash -f /config-dir/logstash1.conf):
14:29:30.703 [[main]<exec] ERROR logstash.inputs.exec - Error while running command {:command=>"netsat -ano", :e=>#<IOError: Cannot run program "netsat" (in directory "/"): error=2, No such file or directory>, :backtrace=>["org/jruby/RubyIO.java:4380:in `popen'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-exec-3.1.2/lib/logstash/inputs/exec.rb:76:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-exec-3.1.2/lib/logstash/inputs/exec.rb:75:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-exec-3.1.2/lib/logstash/inputs/exec.rb:40:in `inner_run'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-exec-3.1.2/lib/logstash/inputs/exec.rb:34:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:443:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:436:in `start_input'"]}
Can anyone help please? Thanks in advance!
You will need to provide a full path for those commands, as the PATH Logstash runs with doesn't contain those directories (note also that the command is misspelled: it should be netstat, not netsat).
input {
  exec {
    command => "/bin/netstat -ano"
    codec => "json"
    interval => 5
  }
}
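If you are not sure where those binaries live inside the Logstash container, a quick check like this shows the absolute paths to use (a sketch; the container name matches the docker run command above, and the example paths are typical for a Debian-based image, not verified here):

docker exec -it logstash which netstat ifconfig
# e.g. /bin/netstat and /sbin/ifconfig, then use those full paths in the exec input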

executing shell script and using its output as input to next gradle task

I am using Gradle for build and release, so my Gradle script executes a shell script. The shell script outputs an IP address which has to be provided as an input to my next Gradle SSH task. I am able to get the output and print it on the console, but I am not able to use this output as an input to the next task.
remotes {
    web01 {
        def ip = exec {
            commandLine './returnid.sh'
        }
        println ip        // --> I am able to see the IP address on the console
        role 'webServers'
        host = ip         // --> I tried referring to it as $ip and '$ip'; both result in syntax errors
        user = 'ubuntu'
        password = 'ubuntu'
    }
}
task checkWebServers1 << {
    ssh.run {
        session(remotes.web01) {
            execute 'mkdir -p /home/ubuntu/abc3'
        }
    }
}
but it results in this error:
What went wrong:
Execution failed for task ':checkWebServers1'.
java.net.UnknownHostException: {exitValue=0, failure=null}
Can anyone please help me use the output variable with the proper syntax, or give me some hints? Thanks in advance.
The reason it's not working is that the exec call returns an ExecResult (here is its JavaDoc description), not the text output of the execution.
If you need the text output, you have to specify the standardOutput property of the exec task. This can be done like so:
remotes {
    web01 {
        def ip = new ByteArrayOutputStream()
        exec {
            commandLine './returnid.sh'
            standardOutput = ip
        }
        println ip
        role 'webServers'
        host = ip.toString().split("\n")[2].trim()
        user = 'ubuntu'
        password = 'ubuntu'
    }
}
Just note that the ip value by default will be multiline output, including the command itself, so it has to be parsed to get the correct value. For my Windows machine this could be done as:
ip.toString().split("\n")[2].trim()
Here it takes the third line of the output.
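As a follow-up sketch (a hypothetical simplification, not part of the answer above): if returnid.sh prints nothing but the IP address, the parsing reduces to a plain trim:

def out = new ByteArrayOutputStream()
exec {
    commandLine './returnid.sh'
    standardOutput = out
}
host = out.toString().trim()   // valid only when the script's sole output is the IP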

How to read /var/log/wtmp logs in elasticsearch

I am trying to read the access logs from /var/log/wtmp into Elasticsearch.
I can read the file when logged into the box by using last -F /var/log/wtmp.
I have Logstash running and sending logs to Elasticsearch; here is the Logstash conf file.
input {
  file {
    path => "/var/log/wtmp"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    host => localhost
    protocol => "http"
    port => "9200"
  }
}
What is showing in Elasticsearch is:
G
Once I opened the file using less, I could only see binary data, so Logstash can't understand this data. A Logstash config like the following should work fine:
input {
  pipe {
    command => "/usr/bin/last -f /var/log/wtmp"
  }
}
output {
  elasticsearch {
    host => localhost
    protocol => "http"
    port => "9200"
  }
}
Vineeth's answer is right, but the following cleaner config works as well:
input { pipe { command => "last" } }
last -f /var/log/wtmp and last are exactly the same.
utmp, wtmp, btmp are Unix files that keep track of user logins and logouts. They cannot be read directly because they are not regular text files. However, there is the last command which displays the information of /var/log/wtmp in plain text.
$ last --help
Usage:
last [options] [<username>...] [<tty>...]
I can read the file when logged into the box by using last -F /var/log/wtmp
I doubt that. What the -F flag does:
-F, --fulltimes print full login and logout times and dates
So, last -F /var/log/wtmp will interpret /var/log/wtmp as a username and won't print any login information.
What the -f flag does:
-f, --file <file> use a specific file instead of /var/log/wtmp
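In other words, the file has to be passed with the lowercase -f; the uppercase -F only changes the timestamp format and can be combined with it (a quick sketch):

last -f /var/log/wtmp        # read an explicit wtmp file
last -f /var/log/wtmp -F     # same, with full login and logout times and dates
last -F                      # reads the default /var/log/wtmp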
