Packetbeat throws Bulk item insert failed error - elasticsearch

Packetbeat throws the following error:
Bulk item insert failed
when the following processor is added to packetbeat.yml:
processors.include_fields.fields: ["http.request.body"]
Error log
2018-06-04T00:37:40.893+0530 ERROR pipeline/output.go:92 Failed to publish events: temporary bulk send failure
2018-06-04T00:37:40.893+0530 DEBUG [elasticsearch] elasticsearch/client.go:666 ES Ping(url=http://localhost:9200)
2018-06-04T00:37:40.894+0530 DEBUG [elasticsearch] elasticsearch/client.go:689 Ping status code: 200
2018-06-04T00:37:40.894+0530 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.2
2018-06-04T00:37:40.894+0530 DEBUG [elasticsearch] elasticsearch/client.go:708 HEAD http://localhost:9200/_template/packetbeat-6.2.4 <nil>
2018-06-04T00:37:40.895+0530 INFO template/load.go:73 Template already exists and will not be overwritten.
2018-06-04T00:37:40.896+0530 DEBUG [elasticsearch] elasticsearch/client.go:303 PublishEvents: 1 events have been published to elasticsearch in 1.245631ms.
2018-06-04T00:37:40.896+0530 DEBUG [elasticsearch] elasticsearch/client.go:507 Bulk item insert failed (i=0, status=500): {"type":"string_index_out_of_bounds_exception","reason":"String index out of range: 0"}
Environment: elasticsearch version - 6.2.4
packetbeat version - 6.2.4

I managed to find the root cause of this error. It appeared after adding the following to packetbeat.yml:
index: "packetbeat-%{[beat.version]}-%{+yyyy.MM.dd.HH}"
When I removed it, the problem disappeared. It seems to be a bug with custom index naming.
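For what it's worth, in Beats 6.x a custom output.elasticsearch.index is documented to require matching setup.template.name and setup.template.pattern settings. A minimal sketch of how that part of packetbeat.yml might look (the template name and pattern below are assumptions, not from the original post; the host comes from the log above):

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "packetbeat-%{[beat.version]}-%{+yyyy.MM.dd.HH}"

setup.template.name: "packetbeat"
setup.template.pattern: "packetbeat-*"

Without the template settings, the custom index name may not match any index template, which can lead to mapping-related bulk failures like the one above.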

Related

io.fabric8.kubernetes.client.KubernetesClientException: Failure executing

I am trying to install Cloudflow version 2.0.25 in an EKS cluster using Helm, but the pod goes to CrashLoopBackOff status with the error below:
ERROR [ActorSystemImpl] - Unexpected error starting cloudflow operator, terminating.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://172.20.0.1/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions. Message: the server could not find the requested resource. Received status: Status(apiVersion=v1, code=404, details=StatusDetails(causes=[], group=null, kind=null, name=null, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=the server could not find the requested resource, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=NotFound, status=Failure, additionalProperties={}).
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:570)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:509)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:474)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:435)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:250)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:871)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:366)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:85)
at cloudflow.operator.Main$.checkCRD(Main.scala:140)
at cloudflow.operator.Main$.main(Main.scala:61)
at cloudflow.operator.Main.main(Main.scala)
Please help me resolve this issue.
I tried creating the CRD manually, and also created custom roles and role bindings, but it didn't work.
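One thing worth checking (an assumption based on the 404 from apiextensions.k8s.io/v1beta1): Kubernetes 1.22 removed the v1beta1 CustomResourceDefinition API, so an operator that still posts CRDs to that endpoint gets exactly this NotFound response on newer EKS clusters. You can confirm which versions your cluster still serves:

kubectl api-versions | grep apiextensions
# prints apiextensions.k8s.io/v1; apiextensions.k8s.io/v1beta1 appears only on clusters older than 1.22

If only v1 is listed, the operator needs a release that creates its CRDs through the v1 API, regardless of any roles or role bindings you add.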

Filebeat index is getting created but with 0 documents

I am trying to index my custom log file using Filebeat. I am successfully running Filebeat with pre-built modules like mysql, nginx, etc., but when I try to use it with my application-specific log file, the index is created with 0 documents.
I could not find anything in the Filebeat documentation about specific steps that need to be taken to ensure indexing takes place for custom log files.
I did not get any errors when I set up Filebeat or ran it after setup.
Below is the filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /Applications/MAMP/htdocs/247around-adminp-aws/application/logs/log-2020-12-21.log
  include_lines: ['^INFO', '^ERROR']
  fields:
    app_id: crm
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["localhost:9200"]
processors:
As can be seen, it is mostly the default .yml file with very minor changes.
My custom log file log-2020-12-21.php is:
INFO - 2020-12-21 15:10:26 --> index Logging details have been captured for employee. Details are : Array
INFO - 2020-12-21 15:10:36 --> editpartner partner_id:1
INFO - 2020-12-21 15:10:36 --> SELECT DISTINCT service_id, brand, active
ERROR - 2020-12-21 15:10:36 --> Query error: Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'boloaaka.collateral.id' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by
INFO - 2020-12-21 15:10:36 --> Database Error: A Database Error Occurred<br/>Array
ERROR - 2020-12-21 15:10:54 --> Query error: Expression #5 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'boloaaka.service_centres.district' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by
INFO - 2020-12-21 15:10:54 --> Database Error: A Database Error Occurred<br/>Array
INFO - 2020-12-21 23:53:21 --> Loginindex
INFO - 2020-12-21 23:54:50 --> Loginindex
INFO - 2020-12-21 23:55:42 --> Loginindex
INFO - 2020-12-21 23:56:24 --> Loginindex
The index is getting created with 0 documents.
Log file showing the output of the Filebeat setup and the Filebeat run:
https://pastebin.com/TK6uYXuq
Please help:
Why are there no error messages if something is wrong that prevents the documents from being indexed? I should be getting some error if things are not right.
How should I index my log file?
Where should I add a pattern for my log file (like key-value pairs) that would help me search the documents for relevant values later on?
Thanks for your help.
In your Filebeat configuration, are you sure you are referring to the exact file where your logs are stored? The 'paths' entry in your filebeat.yml refers to a .log file extension, while the custom log file you've pasted is log-2020-12-21.php. Try changing your paths to match the .php extension instead.
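For example (same path as in the question, only the extension changed):

  paths:
    - /Applications/MAMP/htdocs/247around-adminp-aws/application/logs/log-2020-12-21.php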
If Filebeat picks this file up correctly, you should see something like the line below in your Filebeat logs:
INFO log/harvester.go:287 Harvester started for file: /Applications/MAMP/htdocs/247around-adminp-aws/application/logs/log-2020-12-21.php
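Regarding the third question (where to add a pattern): one option is a dissect processor in filebeat.yml, which splits each line into fields before indexing. A minimal sketch, assuming the "LEVEL - timestamp --> message" layout shown above (the target field names are made up for illustration):

processors:
  - dissect:
      tokenizer: "%{level} - %{timestamp} --> %{log_message}"
      field: "message"
      target_prefix: "parsed"

The parsed values would then land under parsed.level, parsed.timestamp, and parsed.log_message in the indexed documents, which you can search on later.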

InvalidMagicIdException: Not able to add cache in infinispan

I am working with Infinispan. While adding to a cache over REST with
POST /rest/v2/caches/{cacheName}/{cacheKey}
(and also via NiFi), I get the following error:
12:17:55,695 ERROR [org.infinispan.server.hotrod.BaseRequestProcessor] (HotRod-ServerIO-5-1) ISPN005003: Exception reported: org.infinispan.server.hotrod.InvalidMagicIdException: Error reading magic byte or message id: 10
at org.infinispan.server.hotrod:ispn-10.0#10.0.0.Beta3//org.infinispan.server.hotrod.HotRodDecoder.switch0(HotRodDecoder.java:208)
at org.infinispan.server.hotrod:ispn-10.0#10.0.0.Beta3//org.infinispan.server.hotrod.HotRodDecoder.switch1_0(HotRodDecoder.java:153)
at org.infinispan.server.hotrod:ispn-10.0#10.0.0.Beta3//org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.java:143)
at io.netty:ispn-10.0#4.1.30.Final//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.netty:ispn-10.0#4.1.30.Final//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
at io.netty:ispn-10.0#4.1.30.Final//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
I am using the jboss/infinispan-server Docker image, versions 10.0.0.Beta and 10.0.0.CR1-3.
I am not getting any clue on how to trace the issue.
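A hedged guess based on the stack trace: InvalidMagicIdException is thrown by the Hot Rod decoder, which suggests the HTTP/REST request is being sent to the Hot Rod port rather than to the REST endpoint. A minimal sketch of the same call against the REST port, assuming the WildFly-based image exposes REST on 8080 and Hot Rod on 11222 (the port numbers, cache name, and key are assumptions; check the ports your container actually exposes):

curl -X POST -H "Content-Type: text/plain" --data "some-value" \
  http://localhost:8080/rest/v2/caches/mycache/mykey

If the request reaches the Hot Rod port instead, the server tries to read a Hot Rod magic byte from the HTTP request line and fails exactly as in the log above.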

Loading logs on one machine into Elasticsearch running on another machine using Logstash

I have my logs and Logstash running on one EC2 machine (M1), and I read the logs on that machine with this config:
input {
  file {
    path => "/path/to/logs/in/M1"
    start_position => "beginning"
  }
}
Now, we have Elasticsearch running on a different EC2 machine (M2), and I need to transfer the logs from M1 to Elasticsearch on M2 using Logstash. I used the following output config:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "http://<M2 ip address>:9200"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
When I run the config file, I get the following error:
04:18:57.640 [[main]>worker0] WARN logstash.outputs.elasticsearch - UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
04:18:57.646 [[main]>worker0] ERROR logstash.outputs.elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>2}
04:18:59.682 [[main]>worker0] WARN logstash.outputs.elasticsearch - UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
04:18:59.686 [[main]>worker0] ERROR logstash.outputs.elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
04:19:01.109 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:188] WARN logstash.outputs.elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x1d08c988 URL:http://10.60.40.120:9200>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://10.60.40.120:9200][Manticore::ConnectTimeout] connect timed out"}
04:19:02.111 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:188] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:url=>#<URI::HTTP:0x55444fcf URL:http://10.60.40.120:9200>, :healthcheck_path=>"/"}
I am new to logstash. Any help is appreciated.
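A couple of things worth checking for the "No Available connections" error (suggestions, not from the original post): from M1, confirm that Elasticsearch on M2 is actually reachable, and on M2 make sure Elasticsearch is bound to a non-loopback address (network.host in elasticsearch.yml) and that the EC2 security group allows inbound traffic on port 9200 from M1. A quick connectivity check:

# run this from M1; it should return the node's banner JSON if Elasticsearch on M2 is reachable
curl http://<M2 ip address>:9200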
UPDATE:
So I looked around in forums, and one suggested solution was to update the Logstash Elasticsearch output plugin using the command:
sudo /usr/share/logstash/bin/logstash-plugin update logstash-output-elasticsearch
I also updated the logstash config file to include username and password:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["<M2 ip address>"]
    user => 'username'
    password => 'changeme'
    index => "logstash-%{+YYYY.MM.dd}"
    manage_template => false
  }
}
Now I'm getting a different error. Please help:
09:16:21.305 [[main]>worker0] WARN logstash.outputs.elasticsearch - Could not index event to Elasticsearch. {:status=>404, :action=>["index", {:_id=>nil, :_index=>"logstash-2017.04.17", :_type=>"Messagelog", :_routing=>nil}, 2017-04-17T10:06:11.348Z ip-10-60-40-201 No valid licenses found for COLL], :response=>{"index"=>{"_index"=>"logstash-2017.04.17", "_type"=>"Messagelog", "_id"=>nil, "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index and [action.auto_create_index] ([.security,.monitoring*,.watches,.triggered_watches,.watcher-history*]) doesn't match", "index_uuid"=>"_na_", "index"=>"logstash-2017.04.17"}}}}
Thanks.
It looks like you have disabled automatic index creation in Elasticsearch. By default, Elasticsearch allows indices to be created automatically.
Remove the
action.auto_create_index: -b*,+a*,-*
setting (whatever the pattern is) from your elasticsearch.yml and you will be good.
Furthermore, if you want to allow auto creation of indices starting with l, use the pattern +l*, that is, add:
action.auto_create_index: +l*
Read this for additional information.
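In this particular case, the error message shows the X-Pack default pattern, so instead of removing the setting entirely, another option (a sketch; adjust the pattern to whichever index names you want to allow) is to extend it in elasticsearch.yml so the Logstash indices are permitted as well:

action.auto_create_index: +logstash-*,.security,.monitoring*,.watches,.triggered_watches,.watcher-history*

Patterns are evaluated left to right, so putting +logstash-* first allows the daily logstash-YYYY.MM.dd indices while keeping the existing system-index entries intact.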

Timeout on deleting a snapshot repository

I'm running elasticsearch 1.7.5 w/ 19 nodes (12 data nodes).
Attempting to set up snapshots for backup and recovery, but I am getting a 503 on creation and deletion of a snapshot repository.
curl -XDELETE 'localhost:9200/_snapshot/backups?pretty'
returns:
{
  "error" : "RemoteTransportException[[masternodename][inet[/10.0.0.20:9300]][cluster:admin/repository/delete]]; nested: ProcessClusterEventTimeoutException[failed to process cluster event (delete_repository [backups]) within 30s]; ",
  "status" : 503
}
I was able to adjust the query with master_timeout=10m, but I am still getting a timeout. Is there a way to debug the cause of this request failing?
Performance on this call seems to be related to pending tasks with a higher priority.
https://discuss.elastic.co/t/timeout-on-deleting-a-snapshot-repository/69936/4
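One way to see whether higher-priority tasks are queued up on the master while the repository change waits (a quick check; the pending tasks API is available in 1.7):

curl -XGET 'localhost:9200/_cluster/pending_tasks?pretty'

If the queue is dominated by higher-priority entries (shard-started, create-index, and so on), the repository delete will keep timing out until that backlog clears, which matches the linked discussion.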
