Elasticsearch & Logstash Log4j Vulnerabilities

We have log4j vulnerabilities for Elasticsearch and Logstash in the following paths:
Path : /usr/share/Elasticsearch/lib/log4j-core-2.11.1.jar
Path : /usr/share/logstash/logstash-core/lib/jars/log4j-core-2.14.0.jar
Is there a workaround to fix the vulnerabilities, or is the only solution to upgrade the application version?
Logstash and Elasticsearch version: 7.13
Could you help me to solve the problem?
Thanks
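For reference, one widely documented mitigation short of upgrading is to set -Dlog4j2.formatMsgNoLookups=true in the JVM options and/or strip the JndiLookup class out of each log4j-core jar. A minimal sketch, assuming the default package layout shown above (verify against Elastic's official Log4j advisory for 7.13 before applying):

zip -q -d /usr/share/elasticsearch/lib/log4j-core-2.11.1.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
zip -q -d /usr/share/logstash/logstash-core/lib/jars/log4j-core-2.14.0.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
systemctl restart elasticsearch logstash   # restart so the trimmed jars are loaded; service names may differ on your host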

Related

elasticsearch 7.9.0 jarhell lucene50.Lucene50StoredFieldsFormat$1 debian 10 cannot start

Any help finding the issue with this would be appreciated.
I manually installed elasticsearch 7.9.0 (https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html)
I had to add lucene-backward-codecs-8.7.0.jar to /usr/share/elasticsearch/lib, otherwise I get the error below.
[2022-01-22T21:25:39,352][ERROR][o.e.b.Bootstrap ] [guest] Exception
java.lang.IllegalStateException: jar hell!
class: org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat$1
jar1: /usr/share/elasticsearch/lib/lucene-backward-codecs-8.7.0.jar
jar2: /usr/share/elasticsearch/lib/lucene-core-8.6.0.jar
and if I remove lucene-backward-codecs-8.7.0.jar, I get the error from before:
[2022-01-22T21:32:05,863][ERROR][o.e.b.Bootstrap ] [guest] Exception
java.lang.IllegalArgumentException: Could not load codec 'Lucene87'. Did you forget to add lucene-backward-codecs.jar?
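A quick way to spot the mixed Lucene versions behind this jar hell, assuming the default Debian package layout from the guide above:

ls -1 /usr/share/elasticsearch/lib/lucene-*.jar   # every Lucene jar should carry the same version for a given ES release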
Just encountered this exact issue while running elasticsearch in a docker container. I resolved it by removing all containers/images related to elasticsearch and repulling them.
I didn't see you mention Docker anywhere in your post, so I'm not sure how relevant this will be, but I figured I'd toss it out there just in case.
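In case it helps, a rough sketch of that Docker cleanup (the image tag is illustrative; match it to whatever image you actually run):

docker ps -a | grep elasticsearch                                  # find the containers
docker rm -f <container_id>                                        # remove them
docker rmi docker.elastic.co/elasticsearch/elasticsearch:7.9.0     # drop the cached image
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.9.0    # re-pull a clean copy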

debezium content based routing configuration

I'm using Confluent, so I've installed the Debezium connectors according to the Confluent docs using confluent-hub. In connect.properties I have the entry
plugin.path=/usr/share/java,/opt/confluent-6.0.0/share/confluent-hub-components
I need to use io.debezium.transforms.ContentBasedRouter (https://debezium.io/documentation/reference/1.3/configuration/content-based-routing.html),
so, according to the Debezium doc, I've downloaded debezium-scripting-1.3.1.Final.jar
and put it into
/opt/confluent-6.0.0/share/confluent-hub-components/ and copied it into
/opt/confluent-6.0.0/share/confluent-hub-components/debezium-debezium-connector-sqlserver/lib directories
Here are the entries in my mysql_src.json connector:
"transforms": "unwrap,route",
"transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
"transforms.unwrap.add.fields": "source.snapshot",
"transforms.route.type": "io.debezium.transforms.ContentBasedRouter",
"transforms.route.language": "jsr223.groovy",
"transforms.route.topic.expression": "value.__source_snapshot == 'false' ? 'test'"
When I try to configure/load this connector, I get the following error message:
[2020-12-15 22:18:45,351] ERROR [Worker clientId=connect-1, groupId=connect-cluster] Failed to reconfigure connector's tasks, retrying after backoff: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1369)
java.lang.NoClassDefFoundError: io/debezium/DebeziumException
Any suggestions on how to fix this problem?
According to the docs, you additionally need to obtain a JSR-223 script engine implementation and add its contents to the Debezium plug-in directories of your Kafka Connect environment, since:
Debezium does not come with any implementations of the JSR 223 API. To use an expression language with Debezium, you must download the JSR 223 script engine implementation for the language, and add it to your Debezium connector plug-in directories, along with any other JAR files used by the language implementation.
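For Groovy, the language named in transforms.route.language above, that typically means placing the Groovy runtime and its JSR-223 bridge next to the connector. A minimal sketch with illustrative version numbers, using the lib directory from the question:

cp groovy-3.0.7.jar groovy-json-3.0.7.jar groovy-jsr223-3.0.7.jar \
   /opt/confluent-6.0.0/share/confluent-hub-components/debezium-debezium-connector-sqlserver/lib/
# restart the Kafka Connect worker afterwards so the new jars are picked up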
I am not sure the configuration is correct, but I got past the first configuration problem (I hope). I'm now facing another problem, which I will describe in a different question.
I am not sure what was wrong; I did the following (a rough sketch follows below):
Cleaned up the ZooKeeper directories
Cleaned up the Kafka directories
Ran Kafka in distributed mode using the command-line start/stop scripts (not the Confluent CLI)
This solved the java.lang.NoClassDefFoundError: io/debezium/DebeziumException error.
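A rough sketch of that sequence with the stock Apache Kafka command-line scripts (default data directories and a plain Kafka layout assumed; the Confluent layout differs):

rm -rf /tmp/zookeeper /tmp/kafka-logs                        # default dataDir/log.dirs; adjust if you changed them
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
bin/kafka-server-start.sh -daemon config/server.properties
bin/connect-distributed.sh config/connect-distributed.properties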

org.apache.kylin.job.exception.ExecuteException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/serde2/typeinfo/TypeInfo

I found a similar error at https://issues.apache.org/jira/browse/KYLIN-2511.
env:
hadoop-2.7.1
hbase-1.3.2
apache-hive-2.1.1-bin
apache-kylin-1.6.0-hbase1.x-bin
I've tried copying all the Hive libs to Kylin, but got another ERROR:
org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.NoClassDefFoundError: org/apache/hadoop/hive/serde2/typeinfo/TypeInfo
The missing class should be in hive-exec-.jar; check and debug "bin/find-hive-dependency.sh" to see why it wasn't able to locate this jar on your server. You can manually add it to the "hive_exec_path" variable.
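A quick way to see what the script actually resolves, sketched for a standard Kylin/Hive layout (the Hive path below is illustrative, not taken from this environment):

bash -x bin/find-hive-dependency.sh 2>&1 | grep -i hive-exec
# if the jar is never located, edit the script and set hive_exec_path explicitly, e.g.
# hive_exec_path=/usr/local/apache-hive-2.1.1-bin/lib/hive-exec-2.1.1.jar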
BTW, Kylin 1.6 is quite old; try upgrading to a 2.x version.
Why not just try the method mentioned in https://issues.apache.org/jira/browse/KYLIN-2511? You'd better prepare the environment according to the documentation for v1.6. It is better to use the latest version of Kylin; it has more features and fixes some bugs.

Error reading field 'topic_metadata': Error reading array of size 873589, only 41 bytes available

I have installed Logstash version 5.2.2 by downloading the zip file onto a VM with a fresh Ubuntu install.
I have created a sample config file, logstash-sample.conf, with the following entry:
input {
  stdin { }
}
output {
  stdout { }
}
When executing the command $ bin/logstash -f logstash-simple.conf, it runs absolutely fine.
Now, on the same Ubuntu machine, I installed Kafka by following the exact process described here, up to step 7: https://www.digitalocean.com/community/tutorials/how-to-install-apache-kafka-on-ubuntu-14-04
Then I modified the logstash-sample.conf file to contain the following
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["TutorialTopic"]
  }
}
output {
  stdout { codec => rubydebug }
}
And this time I am getting the following error:
sample#sample-VirtualBox:~/Downloads/logstash-5.2.2$ bin/logstash -f logstash-sample.conf
Sending Logstash's logs to /home/rs-switch/Downloads/logstash-5.2.2/logs which is now configured via log4j2.properties
[2017-03-07T00:26:25,629][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-03-07T00:26:25,650][INFO ][logstash.pipeline ] Pipeline main started
[2017-03-07T00:26:26,039][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
log4j:WARN No appenders could be found for logger (org.apache.kafka.clients.consumer.ConsumerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "Ruby-0-Thread-14: /home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb:229" org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 873589, only 41 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(org/apache/kafka/common/protocol/types/Schema.java:73)
at org.apache.kafka.clients.NetworkClient.parseResponse(org/apache/kafka/clients/NetworkClient.java:380)
at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(org/apache/kafka/clients/NetworkClient.java:449)
at org.apache.kafka.clients.NetworkClient.poll(org/apache/kafka/clients/NetworkClient.java:269)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java:179)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(org/apache/kafka/clients/consumer/KafkaConsumer.java:974)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(org/apache/kafka/clients/consumer/KafkaConsumer.java:938)
at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
at RUBY.thread_runner(/home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb:239)
at java.lang.Thread.run(java/lang/Thread.java:745)
[2017-03-07T00:26:28,742][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
Can anyone help me resolve this issue? I have been stuck setting up the ELK stack for the last few weeks without success.
You most probably have a version conflict that is causing this issue. Check out the compatibility matrix in the Logstash Kafka input plugin's documentation.
The link you mentioned for installing Kafka has you install version 0.8.2.1, which will not work with Kafka 0.10 clients. Kafka has version checking and backwards compatibility, but only if the broker is newer than the client, which is not the case here.
I'd recommend installing a current version of Kafka, there have been immense improvements since version 0.8 that you'd be missing out on if you tried downgrading Logstash instead.
Check out the Confluent Platform Quickstart for an easy way to get started.
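As a quick sanity check of the mismatch, the broker's own jar name reveals its version; a sketch assuming the install location used by that tutorial (adjust the path to your setup):

ls ~/kafka/libs/ | grep '^kafka_'
# e.g. kafka_2.11-0.8.2.1.jar means an 0.8.2.1 broker, too old for the 0.10.x client
# bundled with logstash-input-kafka 5.1.6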

Elasticsearch 2.1.0

Elasticsearch version 2.1.0 is not connecting to Kibana 4.3. I'm seeing a "failed to delete temp file" error:
[2015-12-10 08:20:30,891][INFO ][gateway ] [Mass Master] recovered [1] indices into cluster_state
[2015-12-10 08:20:31,219][WARN ][index.translog ] [Mass Master] [.kibana][0] failed to delete temp file /home/ec2-user/elasticsearch-2.1.0/data/elasticsearch/nodes/0/indices/.kibana/0/translog/translog-6795115948573540946.tlog
java.nio.file.NoSuchFileException: /home/ec2-user/elasticsearch-2.1.0/data/elasticsearch/nodes/0/indices/.kibana/0/translog/translog-6795115948573540946.tlog
I referred to this link https://github.com/elastic/elasticsearch/pull/14872 but was not able to work it out. Any help would be highly appreciated.
Try using compatible versions of Kibana and Elasticsearch, e.g. Kibana 4.3.x for ES 2.1.x. Version incompatibility might be an issue in your case.
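A quick check of what is actually running and where Kibana points, assuming default ports and file locations:

curl -s http://localhost:9200 | grep '"number"'      # Elasticsearch reports its version, e.g. "2.1.0"
grep 'elasticsearch.url' config/kibana.yml           # run from the Kibana 4.3 install directory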
