Log4j2 encoding issue - Elasticsearch

When I run Elasticsearch on Windows 10 with the system language set to English, everything works fine. But if I change the system language to Turkish, I get error messages like these:
2018-07-26 14:42:39,485 main ERROR Unable to locate plugin type for IfFileName
2018-07-26 14:42:39,633 main ERROR Unable to locate plugin for IfAccumulatedFileSize
2018-07-26 14:42:39,634 main ERROR Unable to locate plugin for IfFileName
2018-07-26 14:42:39,637 main ERROR Unable to invoke factory method in class org.apache.logging.log4j.core.appender.rolling.action.DeleteAction for element Delete: java.lang.NullPointerException java.lang.NullPointerException
at org.apache.logging.log4j.core.config.plugins.visitors.PluginElementVisitor.findNamedNode(PluginElementVisitor.java:103)
at org.apache.logging.log4j.core.config.plugins.visitors.PluginElementVisitor.visit(PluginElementVisitor.java:87)
at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.generateParameters(PluginBuilder.java:248)
at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:135)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:958)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:898)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:890)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:890)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:890)
at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:513)
at org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:237)
at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:249)
at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:545)
at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:261)
at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:163)
at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:119)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:291)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124)
at org.elasticsearch.cli.Command.main(Command.java:90)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85)
2018-07-26 14:42:39,645 main ERROR Null object returned for Delete in DefaultRolloverStrategy.
So it seems like a charset problem. The file is encoded as UTF-8; I checked it with Notepad++. Elasticsearch has the JVM option -Dfile.encoding=UTF-8. I also double-checked the log4j2.properties file, and IfFileName has no trailing space.
And if I change IfFileName to ıfFileName (where ı is the Turkish dotless lowercase i), the error becomes:
2018-07-26 14:54:25,819 main ERROR Unable to locate plugin type for ıfFileName
Does anyone have an idea about how to fix that?

Adding the -Duser.language=en JVM parameter fixed the problem.
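For example, with a recent Elasticsearch this can go into config/jvm.options (a sketch; the exact file depends on your version and install):

# config/jvm.options
-Dfile.encoding=UTF-8
-Duser.language=en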

I had the same problem but didn't know where to add the -Duser.language=en. For SonarQube, it turns out, it goes in sonar.properties: find the line with sonar.search.javaAdditionalOpts=, remove the # at the beginning, change it to
sonar.search.javaAdditionalOpts=-Duser.language=en
and save the file.

This is a bug in Log4j2, which calls String#toLowerCase() without a locale parameter: in the Turkish locale, IfFileName is lowercased to ıffilename (with a dotless ı). I have reported this as GH issue #1281.
Until this is fixed, you can write plugin types in all-lowercase (English) letters, e.g. iffilename instead of IfFileName.
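To see the locale-sensitive case mapping in isolation, here is a minimal Java sketch (my own illustration, not code from Log4j2):

import java.util.Locale;

public class TurkishLowercaseDemo {
    public static void main(String[] args) {
        // Locale-sensitive: in Turkish the dotted capital I lowercases to a dotless ı.
        System.out.println("IfFileName".toLowerCase(new Locale("tr", "TR"))); // ıffilename
        // Locale-insensitive root mapping, the usual fix for program identifiers:
        System.out.println("IfFileName".toLowerCase(Locale.ROOT));            // iffilename
    }
}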

Related

Create Kafka connect without confluent

I recently started with Kafka, and I'm trying to create a Kafka Connect connector to connect to Oracle, but I can't get it to work. The information I found is about Confluent, but that doesn't work on Windows... How can I configure or create one with Java?
For my test I use a standalone connection:
.\windows\connect-standalone.bat .\config\connect-standalone.properties .\config\connect-bbdd.properties
with connect-bbdd.properties containing:
name=jdbc-connector
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:oracle:thin#localhost:xe
connection.user: user
connection.password: pwd
mode = bulk
topic.prefix=test
table.whitelist: mytable
Error:
WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
WARN The configuration 'offset.storage.file.filename' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
Jul 21, 2019 10:36:13 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method createConnector in
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains
empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource
contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
[2019-07-21 22:36:13,886] ERROR Failed to create job for ..\config\connect-bbdd.properties (org.apache.kafka.connect.cli.ConnectStandalone)
[2019-07-21 22:36:13,888] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration
is invalid and contains the following 2 error(s):
Invalid value java.sql.SQLException: No suitable driver found for jdbc:oracle:thin#localhost:xe
for configuration Couldn't open connection to jdbc:oracle:thin#localhost:xe
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:118)
...and other errors from "any class loader (org.reflections.Reflections)"
The confluent command doesn't work natively on Windows, no.
But connect-distributed and connect-standalone are not Confluent-only; both should work and load the JDBC connector provided with Confluent Platform, if that is what you downloaded on Windows.
Otherwise, if you have only Apache Kafka, you will need to download the JDBC connector separately and set it up yourself via the plugin.path property mentioned in the Connect config files.
This error that you get:
No suitable driver found for jdbc:oracle:thin#localhost:xe
for configuration Couldn't open connection to jdbc:oracle:thin#localhost:xe
is because you've not made the Oracle JDBC driver available. See https://www.confluent.io/blog/kafka-connect-deep-dive-jdbc-source-connector#jdbc-drivers.
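A minimal sketch of that plugin.path setup (directory names here are placeholders, not from the original question):

# config/connect-standalone.properties
# Directory containing one sub-directory per connector plugin:
plugin.path=C:/kafka/plugins

# Layout on disk, with the Oracle driver placed next to the connector jars:
# C:/kafka/plugins/kafka-connect-jdbc/kafka-connect-jdbc-<version>.jar
# C:/kafka/plugins/kafka-connect-jdbc/ojdbc8.jar

After restarting connect-standalone, the worker can load both the connector class and the JDBC driver.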

NiFi fails to launch due to java.lang.IllegalArgumentException

I have been trying to launch NiFi, but every time I do so I get the following error:
2019-03-06 18:53:46,935 ERROR [main] org.apache.nifi.NiFi Failure to launch NiFi due to java.lang.IllegalArgumentException: java.security.NoSuchAlgorithmException: md5 MessageDigest not available
java.lang.IllegalArgumentException: java.security.NoSuchAlgorithmException: md5 MessageDigest not available
at org.apache.nifi.nar.NarUnpacker.calculateMd5sum(NarUnpacker.java:419)
at org.apache.nifi.nar.NarUnpacker.unpackNar(NarUnpacker.java:228)
at org.apache.nifi.nar.NarUnpacker.unpackNars(NarUnpacker.java:123)
at org.apache.nifi.NiFi.<init>(NiFi.java:128)
at org.apache.nifi.NiFi.<init>(NiFi.java:71)
at org.apache.nifi.NiFi.main(NiFi.java:296)
Caused by: java.security.NoSuchAlgorithmException: md5 MessageDigest not available
at sun.security.jca.GetInstance.getInstance(GetInstance.java:159)
at java.security.Security.getImpl(Security.java:695)
at java.security.MessageDigest.getInstance(MessageDigest.java:167)
at org.apache.nifi.nar.NarUnpacker.calculateMd5sum(NarUnpacker.java:407)
... 5 common frames omitted
2019-03-06 18:53:46,939 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2019-03-06 18:53:46,940 INFO [Thread-1] org.apache.nifi.NiFi Jetty web server shutdown completed (nicely or otherwise).
I understand this is coming from the calculateMd5sum function, which calculates the MD5 sum of a specified file. However, I have made no changes to any of the NARs, nor have I added any custom NARs. The same instance launched before.
I have also tried to start afresh by extracting the setup again, but I face the same error. I fail to understand why the issue is coming up all of a sudden. Please help!
I got it.
My JAVA_HOME pointed to "C:\Program Files\Java\jdk1.8.0_65".
I changed the path to "C:\Program Files (x86)\Java\jre1.8.0_121".
It works fine now.
Thanks @BryanBende
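For reference, this is roughly how the change looks on Windows before launching NiFi (the path comes from the answer above; adjust to your install):

set JAVA_HOME=C:\Program Files (x86)\Java\jre1.8.0_121
bin\run-nifi.bat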

Spring Boot, Apache Camel, and Camel XPath

Apache Camel XPath fails when parsing an XML file.
Please find the route below:
fromF("file://%s?recursive=true", inputDir)
.routeId("PollFiles")
.log("*** file found ${header.CamelFileName}")
.toF("file://%s?recursive=true",
archiveDir)
.log("*** file found ${body}")
//.convertBodyTo(String.class)
.choice().when()
.xpath("//Available[Class='package']"). log("*** found ${body}")
.end();
Error
org.apache.camel.TypeConversionException: Error during type conversion from type: java.lang.String to the required type: org.w3c.dom.Document with value [Body is instance of java.io.InputStream] due java.io.FileNotFoundException: /Users/solution//X1.DTD (No such file or directory)
I would appreciate your assistance.
It's not XPath-related; your error says:
java.io.FileNotFoundException: /Users/solution//X1.DTD (No such file or directory)
That means the file the parser tries to read does not exist.
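This typically happens because the XML declares a DOCTYPE referencing X1.DTD, and the DOM parser tries to resolve that DTD relative to the working directory. A minimal Java sketch of the usual workaround, assuming the failure really is DTD resolution (this is my illustration, not code from the original answer):

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ParseWithoutDtd {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Xerces feature (supported by the JDK's built-in parser):
        // skip fetching external DTDs such as X1.DTD entirely.
        dbf.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
        DocumentBuilder db = dbf.newDocumentBuilder();
        Document doc = db.parse(new File(args[0]));
        System.out.println(doc.getDocumentElement().getNodeName());
    }
}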

WSO2 DAS 3.0.0 with API Manager 1.9.0 not working

I am trying to use DAS 3.0.0 as a replacement for BAM with WSO2 API Manager 1.9.0/1.9.1, with Oracle for the WSO2AM_STATS_DB.
I am following http://blog.rukspot.com/2015/09/publishing-apim-runtime-statistics-to.html
I can see data in DAS's carbon dashboard in Data Explorer tables ORG_WSO2_APIMGT_STATISTICS_REQUEST and ORG_WSO2_APIMGT_STATISTICS_RESPONSE.
But the data is not stored in Oracle, so I am not able to see statistics in the AM Publisher. It keeps saying "Data publishing is enabled. Generate some traffic to see statistics."
I am getting the following error in the log:
[2015-12-08 13:00:00,022] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: APIM_STAT_script for tenant id: -1234
[2015-12-08 13:00:00,037] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: Throttle_script for tenant id: -1234
Exception in thread "dag-scheduler-event-loop" java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:66)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1291)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:874)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1426)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.SnappyInputStream cannot be found by spark-core_2.10_1.4.1.wso2v1
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 15 more
Am I missing something?
Can anyone please help me to figure out this issue?
Thanks in advance.
Move all the libraries (jars) into your project's /WEB-INF/lib; everything under /WEB-INF/lib is then on the classpath.
Use the snappy-java jar file and it will work as you want.
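If you are running DAS itself rather than a web app, a common way to add a jar such as snappy-java to a WSO2 Carbon server's classpath (an assumption about the layout, not something stated in the answers above) is to copy it into the server's lib directory and restart:

cp snappy-java-<version>.jar <DAS_HOME>/repository/components/lib/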

Load CSV data to HBase using pig or hive

Hi, I have created a Pig script which loads data into HBase. My CSV file is stored in Hadoop at /hbase_tables/zip.csv.
Pig Script
register /home/hduser/pig-0.12.0/lib/pig-0.8.0-core.jar;
A = LOAD '/hbase_tables/zip.csv' USING PigStorage(',') as (id:chararray, zip:chararray, desc1:chararray, desc2:chararray, income:chararray);
STORE A INTO 'hbase://mydata' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('zip:zip,desc:desc1,desc:desc2,income:income');
When I execute it, it gives the error below:
Pig Stack Trace
ERROR 2017: Internal error creating job configuration.
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException: ERROR 2017: Internal error creating job configuration.
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:667)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:256)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:147)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.execute(HExecutionEngine.java:378)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1198)
at org.apache.pig.PigServer.execute(PigServer.java:1190)
at org.apache.pig.PigServer.access$100(PigServer.java:128)
at org.apache.pig.PigServer$Graph.execute(PigServer.java:1517)
at org.apache.pig.PigServer.executeBatchEx(PigServer.java:362)
at org.apache.pig.PigServer.executeBatch(PigServer.java:329)
at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:112)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:169)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:141)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:90)
at org.apache.pig.Main.run(Main.java:510)
at org.apache.pig.Main.main(Main.java:107)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: hbase://mydata_logs
at org.apache.hadoop.fs.Path.initialize(Path.java:148)
at org.apache.hadoop.fs.Path.<init>(Path.java:71)
at org.apache.hadoop.fs.Path.<init>(Path.java:45)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:470)
... 20 more
Caused by: java.net.URISyntaxException: Relative path in absolute URI: hbase://mydata_logs
at java.net.URI.checkPath(URI.java:1804)
at java.net.URI.<init>(URI.java:752)
at org.apache.hadoop.fs.Path.initialize(Path.java:145)
... 23 more
Please let me know how I can import a CSV data file into HBase, or whether you have any alternative solution.
It seems like your problem is the "Relative path in absolute URI: hbase://mydata_logs".
Are you sure the path is correct?
Probably the table mydata_logs does not exist. Start hbase shell and type list. Is your table mydata_logs on the list?
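For example (the create statement and its column families are illustrative, taken from the HBaseStorage spec in the question, not from an actual cluster):

$ hbase shell
hbase> list
hbase> create 'mydata', 'zip', 'desc', 'income'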
I had the same task once and have a fully working solution (although I'm not sure about the commas in the third line of your code):
%default hbase_home `echo \$HBASE_HOME`;
%default tmp '/user/alexander/tmp/users_dump/k14';
set zookeeper.znode.parent '/hbase-unsecure';
set hbase.zookeeper.quorum 'dmp-hbase.local';
register $hbase_home/lib/zookeeper-3.4.5.jar;
register $hbase_home/hbase-0.94.20.jar;
UsersHdfs = LOAD '$tmp' USING PigStorage('\t', '-schema');
STORE UsersHdfs INTO 'hbase://user_test' USING
    org.apache.pig.backend.hadoop.hbase.HBaseStorage(
        'id:DEFAULT id:last_modified birth:year gender:female gender:male',
        '-caster HBaseBinaryConverter'
    );
That code works for me; maybe the problem is in your HBase configs.
You could provide your .csv file and we could talk about it in more detail.
