Push log file to Elasticsearch using Logstash

We have an error log file from JMeter that we need to push to Elasticsearch using Logstash.
Here is the log file; we want to push the data from line 5 onward:
2021-05-14 20:32:49,822 INFO o.a.j.u.JMeterUtils: Setting Locale to en_EN
2021-05-14 20:32:49,842 INFO o.a.j.JMeter: Loading user properties from: user.properties
2021-05-14 20:32:49,843 INFO o.a.j.JMeter: Loading system properties from: system.properties
2021-05-14 20:32:49,844 WARN o.a.j.JMeter: LogLevel: ERROR
2021-05-14 20:32:51,824 ERROR o.a.j.e.J.JSR223 PostProcessor: Http URL/API Test 1-4: ; response received: {"timestamp":"2021-05-14T14:32:51.688+0000","status":404,"error":"Not Found","message":"No message available","path":"/healths"}
2021-05-14 20:32:51,824 ERROR o.a.j.e.J.JSR223 PostProcessor: Http URL/API Test 1-7: ; response received: {"timestamp":"2021-05-14T14:32:51.689+0000","status":404,"error":"Not Found","message":"No message available","path":"/healths"}
2021-05-14 20:32:51,824 ERROR o.a.j.e.J.JSR223 PostProcessor: Http URL/API Test 1-20: ; response received: {"timestamp":"2021-05-14T14:32:51.850+0000","status":404,"error":"Not Found","message":"No message available","path":"/healths"}
2021-05-14 20:32:51,824 ERROR o.a.j.e.J.JSR223 PostProcessor: LEX 2-1: ; response received: <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"><SOAP-ENV:Header><ns3:systemContext xmlns:ns3="urn:abc.mon.utils.v1.context"><systemId>TREX</systemId><correlationId>hgfhjghj-c62dsfsdfdb-4aff-a871-8f32088943b3</correlationId></ns3:systemContext></SOAP-ENV:Header><SOAP-ENV:Body><ns3:retrieveStopPayResponse xmlns:ns3="urn:abc.tailhead.readyforcheck.processStopPay"><retrieveGagaPayResponse recordFound="false"><checkNumber/><stopPayDate/></retrieveGagaPayResponse></ns3:retrieveStopPayResponse></SOAP-ENV:Body></SOAP-ENV:Envelope>
2021-05-14 20:32:51,824 ERROR o.a.j.e.J.JSR223 PostProcessor: Http URL/API Test 1-5: ; response received: {"timestamp":"2021-05-14T14:32:51.687+0000","status":404,"error":"Not Found","message":"No message available","path":"/healths"}
We want to push the data from line 4, specifically the PostProcessor: and response received: parts, to Elasticsearch. The conf below is just for response received:.
Here is the Logstash conf:
if [type] == "errors"{​​​​​
grok {​​​​​
match => {​​​​​"message" => "%{​​​​​GREEDYDATA}​​​​​#*response received: %{​​​​​GREEDYDATA:error_message}​​​​​"}​​​​​
}​​​​​
}​​​​​
We get a _grokparsefailure tag when we try to push.
Not sure what I am missing here.

Something like this should work for you:
%{TIMESTAMP_ISO8601:timestamp} %{WORD:LogType} %{GREEDYDATA:data1} PostProcessor: %{GREEDYDATA:data2} ; response received: %{GREEDYDATA:responseRecd}
Example 1:
2021-05-14 20:32:51,824 ERROR o.a.j.e.J.JSR223 PostProcessor: Http URL/API Test 1-4: ; response received: {"timestamp":"2021-05-14T14:32:51.688+0000","status":404,"error":"Not Found","message":"No message available","path":"/healths"}
Output:
{
  "timestamp": [
    [
      "2021-05-14 20:32:51,824"
    ]
  ],
  "LogType": [
    [
      "ERROR"
    ]
  ],
  "data1": [
    [
      "o.a.j.e.J.JSR223"
    ]
  ],
  "data2": [
    [
      "Http URL/API Test 1-4:"
    ]
  ],
  "responseRecd": [
    [
      "{"timestamp":"2021-05-14T14:32:51.688+0000","status":404,"error":"Not Found","message":"No message available","path":"/healths"}"
    ]
  ]
}
Example 2:
2021-05-14 20:32:51,824 ERROR o.a.j.e.J.JSR223 PostProcessor: LEX 2-1: ; response received: <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"><SOAP-ENV:Header><ns3:systemContext xmlns:ns3="urn:abc.mon.utils.v1.context"><systemId>TREX</systemId><correlationId>hgfhjghj-c62dsfsdfdb-4aff-a871-8f32088943b3</correlationId></ns3:systemContext></SOAP-ENV:Header><SOAP-ENV:Body><ns3:retrieveStopPayResponse xmlns:ns3="urn:abc.tailhead.readyforcheck.processStopPay"><retrieveGagaPayResponse recordFound="false"><checkNumber/><stopPayDate/></retrieveGagaPayResponse></ns3:retrieveStopPayResponse></SOAP-ENV:Body></SOAP-ENV:Envelope>
Output:
{
  "timestamp": [
    [
      "2021-05-14 20:32:51,824"
    ]
  ],
  "LogType": [
    [
      "ERROR"
    ]
  ],
  "data1": [
    [
      "o.a.j.e.J.JSR223"
    ]
  ],
  "data2": [
    [
      "LEX 2-1:"
    ]
  ],
  "responseRecd": [
    [
      "<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"><SOAP-ENV:Header><ns3:systemContext xmlns:ns3="urn:abc.mon.utils.v1.context"><systemId>TREX</systemId><correlationId>hgfhjghj-c62dsfsdfdb-4aff-a871-8f32088943b3</correlationId></ns3:systemContext></SOAP-ENV:Header><SOAP-ENV:Body><ns3:retrieveStopPayResponse xmlns:ns3="urn:abc.tailhead.readyforcheck.processStopPay"><retrieveGagaPayResponse recordFound="false"><checkNumber/><stopPayDate/></retrieveGagaPayResponse></ns3:retrieveStopPayResponse></SOAP-ENV:Body></SOAP-ENV:Envelope>"
    ]
  ]
}
I have given the dummy names data1 and data2; you can split them out further as per your needs.
Please use the Grok Debugger for testing patterns.
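If it helps, here is a minimal sketch of the full filter block with that pattern wired into your existing conditional. It assumes your input still sets type => "errors" as in your original conf, and it drops the non-matching INFO/WARN lines so only the ERROR lines are sent on:

filter {
  if [type] == "errors" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:LogType} %{GREEDYDATA:data1} PostProcessor: %{GREEDYDATA:data2} ; response received: %{GREEDYDATA:responseRecd}" }
    }
    # anything that did not match the pattern (startup lines etc.) is dropped here
    if "_grokparsefailure" in [tags] {
      drop { }
    }
  }
}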

Alternatively, a simpler pattern that anchors on ERROR and captures only the response body:
^%{GREEDYDATA}ERROR%{GREEDYDATA}response received: %{GREEDYDATA:error_message}

Related

Retrieve all files that match a filter once

I'm trying to get the file count from my filter on my streaming inbound FTP adapter, so that after I process all files I can launch a remote shell. Or is there any other way to know that the adapter has finished sending messages?
I already tried a CompositeFileListFilter overriding the public List<F> filterFiles(F[] files) method, but it never gets called.
For now I'm using a fixed file count, but it should be dynamic.
I made an override of this method on the CompositeFileListFilter:
@Override
public List<F> filterFiles(F[] files) {
    log.info("received {} files", files.length);
    return super.filterFiles(files);
}
I have the following integration flow, using an atomic counter that counts up to 3 (hard-coded to 3 for now):
AtomicInteger messageCounter = new AtomicInteger(0);

return IntegrationFlows.from(Ftp.inboundStreamingAdapter(goldv5template())
            .remoteDirectory("/inputFolder")
            .filter(new CompositeFileListFilterWithCount<>() {{
                addFilter(new FtpSimplePatternFileListFilter("pattern1.*"));
                addFilter(new FtpSimplePatternFileListFilter("pattern2.*"));
                addFilter(new FtpSimplePatternFileListFilter("pattern3.*"));
            }})
        , pollerConfiguration)
        .transform(Transformers.fromStream(StandardCharsets.UTF_8.toString()))
        .log(message -> "process file " + message.getHeaders().get(FileHeaders.REMOTE_FILE))
        .handle(message -> {
            int numericValue = messageCounter.incrementAndGet();
            log.info("numeric value: {}", numericValue);
            if (numericValue == 3) {
                messageCounter.set(0);
                log.info("launch remote shell here now");
            }
        }, e -> e.advice(after()))
        .get();
If I don't use the counter, I get a remote shell call for every file, and I only need it to be called once, when the flow has finished. It is scheduled via a cron job, so I want to call it only one time, at the end.
I'm using a 1 s fixed delay for testing, but it would only run three times a day; I have to fetch all the files at every scheduled run.
This is my pollerConfiguration for testing:
sourcePollingChannelAdapterSpec -> sourcePollingChannelAdapterSpec.poller(pollerFactory -> pollerFactory.fixedRate(1000L))
UPDATE
I tried what Artem suggested, but I'm seeing weird behavior. I'm trying to fetch all the files in a certain FTP folder in one poll, so, reading the docs:
if the max-messages-per-poll is set to 1 (the default), it processes only one file at a time with intervals as defined by your trigger, essentially working as “one-poll === one-file”.
For typical file-transfer use cases, you most likely want the opposite behavior: to process all the files you can for each poll and only then wait for the next poll. If that is the case, set max-messages-per-poll to -1. Then, on each poll, the adapter tries to generate as many messages as it possibly can...
So I have set max-messages-per-poll to -1 so that every poll gives me every file.
I added a filter to take only .xml files and an AcceptOnceFileListFilter to prevent duplicates, but the FTP streaming adapter keeps giving me the same files over and over, which doesn't make sense. For this test I used a fixed delay of 10 s.
2019-07-23 10:32:04.308 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process2 file sample1.xml
2019-07-23 10:32:04.312 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample1.xml
2019-07-23 10:32:04.313 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.313 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.315 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample2.xml
2019-07-23 10:32:04.324 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample2.xml
2019-07-23 10:32:04.324 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.324 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.326 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample3.xml
2019-07-23 10:32:04.330 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample3.xml
2019-07-23 10:32:04.331 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.331 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.333 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample4.xml
2019-07-23 10:32:04.337 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample4.xml
2019-07-23 10:32:04.338 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.338 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.341 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample1.xml
2019-07-23 10:32:04.345 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample1.xml
2019-07-23 10:32:04.346 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.346 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.347 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample2.xml
2019-07-23 10:32:04.351 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample2.xml
2019-07-23 10:32:04.351 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.351 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.353 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample3.xml
2019-07-23 10:32:04.356 INFO 9008 --- [ scheduling-1] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /output/sample3.xml
2019-07-23 10:32:04.356 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : Advice after handle.
2019-07-23 10:32:04.357 INFO 9008 --- [ scheduling-1] i.d.e.v.job.factory.TestFlowFactory : ________________________________
2019-07-23 10:32:04.358 INFO 9008 --- [ scheduling-1] o.s.integration.handler.LoggingHandler : process file sample4.xml
...............................
return IntegrationFlows
        .from(Ftp.inboundStreamingAdapter(testFlowTemplate())
                    .remoteDirectory("/inputTestFlow")
                    .filter(new CompositeFileListFilter<>() {{
                        addFilter(new AcceptOnceFileListFilter<>());
                        addFilter(new FtpSimplePatternFileListFilter("*.xml"));
                    }})
            , sourcePollingChannelAdapterSpec ->
                    sourcePollingChannelAdapterSpec.poller(pollerConfiguration.maxMessagesPerPoll(-1)))
        .transform(Transformers.fromStream(StandardCharsets.UTF_8.toString()))
        .log(message -> {
            execution.setStartDate(new Date());
            return "process file " + message.getHeaders().get(FileHeaders.REMOTE_FILE);
        })
        .handle(Ftp.outboundAdapter(FTPServers.PC_LOCAL.getFactory(), FileExistsMode.REPLACE)
                    .useTemporaryFileName(false)
                    .fileNameExpression("headers['" + FileHeaders.REMOTE_FILE + "']")
                    .remoteDirectory("/output/")
            , e -> e.advice(testFlowAfter())
        )
        .get();
Update 2
I achieved what I needed by creating this custom filter:
.filter(new FileListFilter<>() {

    private final Set<String> seenSet = new HashSet<>();
    private Date lastExecution;

    @Override
    public List<FTPFile> filterFiles(FTPFile[] files) {
        return Arrays.stream(files).filter(ftpFile -> {
            if (lastExecution != null
                    && TimeUnit.MILLISECONDS.toSeconds(new Date().getTime() - lastExecution.getTime()) >= 10L) {
                this.seenSet.clear();
            }
            lastExecution = new Date();
            if (ftpFile.getName().endsWith(".xml")) {
                return this.seenSet.add(ftpFile.getRawListing());
            }
            return false;
        }).collect(Collectors.toList());
    }
})
But I used a hand-made 10-second interval, which is okay for my needs. Is there any smarter way to make this code better, tied to the trigger?
I think a cron trigger is not the right solution here, since you really want a single process for all the fetched files.
I think your logic in filterFiles() is wrong. You really want to set the counter to the number of files that are going to be processed, not the original amount:
@Override
public List<F> filterFiles(F[] files) {
    List<F> filteredFiles = super.filterFiles(files);
    log.info("received {} files", filteredFiles.size());
    return filteredFiles;
}
And here you can indeed set that value into the messageCounter.
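For example, a rough sketch of that idea (hypothetical wiring; messageCounter is the field from your flow):

@Override
public List<F> filterFiles(F[] files) {
    List<F> filteredFiles = super.filterFiles(files);
    // expect exactly this many messages for the current poll
    messageCounter.set(filteredFiles.size());
    return filteredFiles;
}

and in the handle step decrement it, launching the shell only when it reaches zero:

.handle(message -> {
    if (messageCounter.decrementAndGet() == 0) {
        log.info("launch remote shell here now");
    }
}, e -> e.advice(after()))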
UPDATE
There is this functionality on filter:
/**
 * Indicates that this filter supports filtering a single file.
 * Filters that return true <b>must</b> override {@link #accept(Object)}.
 * Default false.
 * @return true to allow external calls to {@link #accept(Object)}
 * @since 5.2
 * @see #accept(Object)
 */
default boolean supportsSingleFileFiltering() {
    return false;
}
I think when you override it to return an explicit false in your CompositeFileListFilterWithCount, you should be good. Otherwise you are indeed right: only a plain accept() is called for each file by default, simply because every FtpSimplePatternFileListFilter returns true for it by default, and together they make the composite return true as well.
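For illustration, a minimal sketch of that override, written as a named subclass rather than the anonymous one from the question (the class name comes from the question; the logger wiring is an assumption):

import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.integration.file.filters.CompositeFileListFilter;

public class CompositeFileListFilterWithCount<F> extends CompositeFileListFilter<F> {

    private static final Logger log = LoggerFactory.getLogger(CompositeFileListFilterWithCount.class);

    @Override
    public boolean supportsSingleFileFiltering() {
        return false; // force the adapter to call filterFiles(F[]) with the whole batch
    }

    @Override
    public List<F> filterFiles(F[] files) {
        List<F> filteredFiles = super.filterFiles(files);
        log.info("passing {} of {} files downstream", filteredFiles.size(), files.length);
        return filteredFiles;
    }
}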
Nevertheless, all of that tells us that you are already using Spring Integration 5.2 :-)...
UPDATE 2
Try ChainFileListFilter instead. Place an AcceptOnceFileListFilter at the end of the chain. Although it might be better to use an FtpPersistentAcceptOnceFileListFilter instead: it takes the file's lastModified into account. Also consider including in the chain some LastModifiedFileListFilter variant for the FTPFile, similar to what you have in your custom one, but as a separate filter.
Not sure, though, what you mean about making it based on the trigger. There is just no relationship between the filter and the trigger. You may, of course, have some common interval property and feed it into the last-modified filter value.
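A minimal sketch of such a chain, assuming an in-memory metadata store (the method and prefix names here are arbitrary):

import org.apache.commons.net.ftp.FTPFile;
import org.springframework.integration.file.filters.ChainFileListFilter;
import org.springframework.integration.ftp.filters.FtpPersistentAcceptOnceFileListFilter;
import org.springframework.integration.ftp.filters.FtpSimplePatternFileListFilter;
import org.springframework.integration.metadata.SimpleMetadataStore;

public class ChainFilterExample {

    // pattern filter first, then "accept once" keyed on file name + lastModified,
    // so re-listed but unchanged files are skipped on subsequent polls
    public static ChainFileListFilter<FTPFile> xmlOnceFilter() {
        ChainFileListFilter<FTPFile> chain = new ChainFileListFilter<>();
        chain.addFilter(new FtpSimplePatternFileListFilter("*.xml"));
        chain.addFilter(new FtpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "ftpFiles-"));
        return chain;
    }
}

SimpleMetadataStore keeps its state in memory only; a persistent MetadataStore implementation would also survive application restarts.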
By the way, your story has drifted far from the original accept-once request. An inbound channel adapter really produces one file per message, so you definitely can't have a list of files in one message, as is possible with the FtpOutboundGateway and its LS or MGET commands, as I mentioned in the comments below.
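For reference, a rough sketch of that gateway alternative (the session factory name, remote pattern, and local directory here are hypothetical, not taken from your flow):

.handle(Ftp.outboundGateway(ftpSessionFactory(),
            AbstractRemoteFileOutboundGateway.Command.MGET, "'/inputTestFlow/*'")
        .localDirectory(new File("/tmp/ftp-in")))
// the next handler receives ONE message whose payload is the List<File> that was fetched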
Regarding "how can I achieve wether all files in one message or all the messages together?" you can try property "max-messages-per-poll". It means:
"The maximum number of messages that will be produced for each poll. Defaults to
infinity (indicated by -1) for polling consumers, and 1 for polled inbound channel adapters.

Error while running .navigateTo() protocol action: timeout

I updated the Nightwatch version to 1.0.19.
While executing the script I get the following timeout issue in the navigate() function.
Here is the console output:
{ sessionId: '2aeb3cd9367d3580ae4ae705abca2a80',
status: 21,
value:
{ message: 'timeout',
error:
[ ' (Session info: chrome=63.0.3239.132)',
' (Driver info: chromedriver=2.35.528139 (47ead77cb35ad2a9a83248b292151462a66cd881),platform=Linux 4.4.0-1038-aws x86_64)' ] } }
Error while running .navigateTo() protocol action: timeout
..........................

AbpApiExceptionFilterAttribute - A value is required but was not present in the request

I have a webapi controller like below:
[ResponseType(typeof(SampleDto))]
public IHttpActionResult GetSample(string name, string guid)
and the request takes name and guid as query string parameters, like:
http://www.example.com/api/Controller1/GetSample?name=james&guid=
Here the guid is empty.
When I issue the request, there is an error:
WARN 2018-09-29 07:04:21,361 [18 ] nHandling.AbpApiExceptionFilterAttribute - Method arguments are not valid! See ValidationErrors for details.
Abp.Runtime.Validation.AbpValidationException: Method arguments are not valid! See ValidationErrors for details.
WARN 2018-09-29 07:04:21,361 [18 ] nHandling.AbpApiExceptionFilterAttribute - There are 2 validation errors:
WARN 2018-09-29 07:04:21,361 [18 ] nHandling.AbpApiExceptionFilterAttribute - A value is required but was not present in the request. (guid.String)
WARN 2018-09-29 07:04:21,361 [18 ] nHandling.AbpApiExceptionFilterAttribute - A value is required but was not present in the request. (guid.String)
Where can I change the validation rule?

Spring #Scheduled fixedDelay is not working as expected

I have two jobs running asynchronously: one triggers every minute and the other runs with a fixed delay.
@Scheduled(fixedDelay = 30000)
public void runJob() {
    try {
        JobParameters jobParameters = new JobParametersBuilder()
                .addLong("time", System.currentTimeMillis()).toJobParameters();
        JobExecution execution = jobLauncher.run(job, jobParameters);
        LOGGER.info(execution.getExitStatus());
    } catch (Exception e) {
        try {
            throw new SystemException("Scheduler ERROR :: Error occurred during Job run " + e);
        } catch (SystemException e1) {
            LOGGER.error("Scheduler ERROR :: Error occurred during Job run " + e);
        }
    }
}
@Scheduled(cron = "0 0/1 * * * ?")
public void runJob2() {
    try {
        JobParameters jobParameters = new JobParametersBuilder()
                .addLong("time", System.currentTimeMillis()).toJobParameters();
        JobExecution execution = jobLauncher.run(job2, jobParameters);
        LOGGER.info(execution.getExitStatus());
    } catch (Exception e) {
        try {
            throw new SystemException("ERROR :: Exception occurred " + e);
        } catch (SystemException e1) {
            LOGGER.error("ERROR :: JOB launching exception happened " + e);
        }
    }
}
The fixedDelay documentation says "the duration between the end of last execution and the start of next execution is fixed", but for me it is triggering with a fixed delay between the starts of the last and the next execution.
2018-05-11 **12:48:00.016** INFO 2112 --- [ taskExecutor-5] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=systemStartJob]] launched with the following parameters: [{time=1526023080016}]
2018-05-11 12:48:00.016 INFO 2112 --- [ taskExecutor-5] org.sapient.t1automation.SystemListener : Intercepting system Job Execution - Before Job!
2018-05-11 12:48:00.017 INFO 2112 --- [ taskExecutor-5] o.s.batch.core.job.SimpleStepHandler : Executing step: [systemStartStep]
.
.
.
.
2018-05-11 **12:48:24.721** INFO 2112 --- [ taskExecutor-6] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=sendMailJob]] launched with the following parameters: [{time=1526023104706}]
2018-05-11 12:48:24.737 INFO 2112 --- [ taskExecutor-6] org.sapient.t1automation.MailListener : Intercepting Job Excution - Before Job!
2018-05-11 12:48:24.737 INFO 2112 --- [ taskExecutor-6] o.s.batch.core.job.SimpleStepHandler : Executing step: [sendMailStep1]
.
.
.
2018-05-11 12:48:44.533 INFO 2112 --- [ taskExecutor-6] org.sapient.t1automation.MailListener : Intercepting Job Excution - After Job!
2018-05-11 12:48:44.533 INFO 2112 --- [ taskExecutor-6] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=sendMailJob]] completed with the following parameters: [{time=1526023104706}] and the following status: [COMPLETED]
2018-05-11 12:48:45.001 INFO 2112 --- [ taskExecutor-3] o.s.t.service.mail.MailReader : Mail:: Mails to process. 1
2018-05-11 12:48:45.017 INFO 2112 --- [ taskExecutor-3] org.sapient.t1automation.MailListener : Intercepting Job Excution - After Job!
2018-05-11 12:48:45.017 INFO 2112 --- [ taskExecutor-3] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=sendMailJob]] completed with the following parameters: [{time=1526023044672}] and the following status: [COMPLETED]
Here the time between the starts of the two executions is 30 s, instead of between the end of the last and the start of the next execution.
Check the execution time of your code. It may be that your code executes within a second, hence you can't see the time difference.
Sample example:
@Scheduled(fixedDelay = 3000)
private void test() {
    System.out.println("test -> " + new Date());
    try {
        Thread.sleep(2000);
    } catch (Exception e) {
        System.out.println("error");
    }
}
Output
test -> Fri May 11 13:45:35 IST 2018
test -> Fri May 11 13:45:40 IST 2018
test -> Fri May 11 13:45:45 IST 2018
test -> Fri May 11 13:45:51 IST 2018
test -> Fri May 11 13:45:56 IST 2018
Here you can see that the difference between every print is 5 seconds (2 s of execution plus the 3 s delay), not 3 seconds.
For debugging, you can add logs at the start and end of your code, and use Thread.sleep() to simulate a long execution.
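For instance, a rough sketch of that instrumentation applied to the first job from the question (names are taken from the question; the exact log statements are just an assumption about your setup):

@Scheduled(fixedDelay = 30000)
public void runJob() {
    LOGGER.info("runJob started at " + new Date());
    try {
        JobParameters jobParameters = new JobParametersBuilder()
                .addLong("time", System.currentTimeMillis()).toJobParameters();
        JobExecution execution = jobLauncher.run(job, jobParameters);
        LOGGER.info("Job exit status: " + execution.getExitStatus());
    } catch (Exception e) {
        LOGGER.error("Scheduler ERROR :: Error occurred during Job run " + e);
    } finally {
        // with fixedDelay, the next "started at" log should appear ~30 s after this line
        LOGGER.info("runJob finished at " + new Date());
    }
}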

yet another Could not contact Elasticsearch at http://logstash.example.com:9200

I have installed Logstash + Elasticsearch + Kibana on one host and received the error from the title. I have googled all over the related topics, but still no luck and am stuck.
I will share the configs I have made:
elasticsearch.yml
cluster.name: hive
node.name: "logstash-central"
network.bind_host: 10.1.1.25
output from /var/log/elasticsearch/hive.log
[2015-01-13 15:18:06,562][INFO ][node ] [logstash-central] initializing ...
[2015-01-13 15:18:06,566][INFO ][plugins ] [logstash-central] loaded [], sites []
[2015-01-13 15:18:09,275][INFO ][node ] [logstash-central] initialized
[2015-01-13 15:18:09,275][INFO ][node ] [logstash-central] starting ...
[2015-01-13 15:18:09,385][INFO ][transport ] [logstash-central] bound_address {inet[/10.1.1.25:9300]}, publish_address {inet[/10.1.1.25:9300]}
[2015-01-13 15:18:09,401][INFO ][discovery ] [logstash-central] hive/T2LZruEtRsGPAF_Cx3BI1A
[2015-01-13 15:18:13,173][INFO ][cluster.service ] [logstash-central] new_master [logstash-central][T2LZruEtRsGPAF_Cx3BI1A][logstash.tw.intra][inet[/10.1.1.25:9300]], reason: zen-disco-join (elected_as_master)
[2015-01-13 15:18:13,193][INFO ][http ] [logstash-central] bound_address {inet[/10.1.1.25:9200]}, publish_address {inet[/10.1.1.25:9200]}
[2015-01-13 15:18:13,194][INFO ][node ] [logstash-central] started
[2015-01-13 15:18:13,209][INFO ][gateway ] [logstash-central] recovered [0] indices into cluster_state
Accessing logstash.example.com:9200 gives the ordinary output, as in the ES guide:
{
"status" : 200,
"name" : "logstash-central",
"cluster_name" : "hive",
"version" : {
"number" : "1.4.2",
"build_hash" : "927caff6f05403e936c20bf4529f144f0c89fd8c",
"build_timestamp" : "2014-12-16T14:11:12Z",
"build_snapshot" : false,
"lucene_version" : "4.10.2"
},
"tagline" : "You Know, for Search"
}
Accessing http://logstash.example.com:9200/_status? gives the following:
{"_shards":{"total":0,"successful":0,"failed":0},"indices":{}}
Kibana's config.js is the default:
elasticsearch: "http://"+window.location.hostname+":9200"
Kibana is used via nginx. Here is /etc/nginx/conf.d/nginx.conf:
server {
  listen *:80;
  server_name logstash.example.com;
  location / {
    root /usr/share/kibana3;
Logstash config file is /etc/logstash/conf.d/central.conf:
input {
  redis {
    host => "10.1.1.25"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "logstash.example.com"
  }
}
Redis is working and the traffic passes between the master and slave (I've checked it via tcpdump).
15:46:06.189814 IP 10.1.1.50.41617 > 10.1.1.25.6379: Flags [P.], seq 89560:90064, ack 1129, win 115, options [nop,nop,TS val 3572086227 ecr 3571242836], length 504
netstat -apnt shows the following:
tcp 0 0 10.1.1.25:6379 10.1.1.50:41617 ESTABLISHED 21112/redis-server
tcp 0 0 10.1.1.25:9300 10.1.1.25:44011 ESTABLISHED 22598/java
tcp 0 0 10.1.1.25:9200 10.1.1.35:51145 ESTABLISHED 22598/java
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 22379/nginx
Could you please tell me which way I should investigate the issue?
Thanks in advance.
The problem is likely due to the nginx setup and the fact that Kibana, while installed on your server, is running in your browser and trying to access Elasticsearch from there. The typical way this is solved is by setting up a proxy in nginx and then changing your config.js.
You have what appears to be a correct nginx proxy set up for Kibana, but you'll need some additional work for Kibana to be able to reach Elasticsearch.
Check the comments on this post: http://vichargrave.com/ossec-log-management-with-elasticsearch/
And check this post: https://groups.google.com/forum/#!topic/elasticsearch/7hPvjKpFcmQ
And this sample nginx config: https://github.com/johnhamelink/ansible-kibana/blob/master/templates/nginx.conf.j2
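For illustration, a minimal sketch of such a proxy, assuming Elasticsearch listens on 10.1.1.25:9200 as in your elasticsearch.yml (the /es/ path is just a name picked for this example):

server {
  listen *:80;
  server_name logstash.example.com;

  location / {
    root /usr/share/kibana3;
  }

  # forward the browser's Elasticsearch requests through the same vhost
  location /es/ {
    proxy_pass http://10.1.1.25:9200/;
    proxy_set_header Host $host;
  }
}

With something like that in place, config.js would point at the proxied path instead of port 9200, e.g. elasticsearch: "http://"+window.location.hostname+"/es".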
You'll have to specify the protocol for elasticsearch in the output section:
elasticsearch {
  host => "logstash.example.com"
  protocol => 'http'
}
