CLI command works differently in JBoss EAP 7.x compared to JBoss EAP 6.x

In JBoss EAP 6.x, below are the details of the CLI command and its response used to get the path of the deployed archive file (picked up from the deployment-scanner subsystem).
CLI:
/deployment=helloworld.war:read-attribute(name=content)
Response:
{
    "outcome" => "success",
    "result" => [{
        "path" => "deployments\\cluster-demo.war",
        "relative-to" => "jboss.server.base.dir",
        "archive" => true
    }]
}
As you can see, the above response gives the proper deployment path.
But when I run the same CLI command in JBoss EAP 7.1, it gives me the response below:
{
    "outcome" => "success",
    "result" => [{"hash" => bytes {
        0xe4, 0x51, 0x63, 0x04, 0x61, 0x2d, 0xd6, 0x29,
        0xac, 0xeb, 0xe1, 0x62, 0x85, 0x3e, 0x52, 0x78,
        0x50, 0x13, 0x82, 0x6e
    }}]
}
With this response, I have to build the deployment path myself by concatenating those bytes with a Java StringBuilder. The resulting path is also different from the one we receive in JBoss EAP 6.x (the path of the deployment folder, as identified by the deployment-scanner subsystem).
Below is the prepared path:
e4/516304612dd629acebe162853e52785013826e/content
This path is relative to the standalone/data/content folder, so the whole path is:
<JBoss EAP installation directory>\standalone\data\content\e4\516304612dd629acebe162853e52785013826e\content
Here "content" is the deployed file. Note that the file has no extension, but if I open it in WinRAR I can see the contents of the actual archive file.
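For reference, a minimal sketch of this StringBuilder construction (class and method names are illustrative only), assuming the 20 hash bytes from the response above have already been read from the content attribute:

import java.nio.file.Path;
import java.nio.file.Paths;

public class ContentPath {
    // Builds the content-repository path from the hash returned by
    // read-attribute(name=content) in EAP 7.x: the first byte becomes a
    // directory name, the remaining bytes the subdirectory, then "content",
    // each byte rendered as two lowercase hex digits.
    static Path contentPath(Path dataContentDir, byte[] hash) {
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return dataContentDir
                .resolve(hex.substring(0, 2))   // e.g. "e4"
                .resolve(hex.substring(2))      // remaining hex digits
                .resolve("content");
    }

    public static void main(String[] args) {
        byte[] hash = {
            (byte) 0xe4, 0x51, 0x63, 0x04, 0x61, 0x2d, (byte) 0xd6, 0x29,
            (byte) 0xac, (byte) 0xeb, (byte) 0xe1, 0x62, (byte) 0x85, 0x3e, 0x52, 0x78,
            0x50, 0x13, (byte) 0x82, 0x6e
        };
        // Prints standalone/data/content/e4/516304612dd629acebe162853e52785013826e/content
        // (with the platform's path separator)
        System.out.println(contentPath(Paths.get("standalone", "data", "content"), hash));
    }
}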
My question is: is there any CLI command available in JBoss EAP 7.x through which I can get the same response as I get in JBoss EAP 6.x?
Please help!
Thanks
Rahul

Related

Logstash not ingesting content into elasticsearch

I have installed elasticsearch-8.2.3, logstash-8.2.3, and kibana-8.2.3. I have configured the logstash conf file to ingest content into Elasticsearch; logstash runs without any error, but it is not ingesting the content.
Below is the conf file:
input {
    #stdin { type => "stdin-type" }
    file {
        path => "D:/logstash-8.2.3/inspec/*.*"
        type => "file"
        start_position => "beginning"
        sincedb_path => "NUL"
        ignore_older => 0
    }
}
filter {
    csv {
        columns => [
            "itemid","itemtitle","rlabel","ayear","rid","rsid","anotatedby","anotatetime","antype","astate","broaderlevel3","broaderlevel2","broaderlevel1","categorylabel","toppreferedlabel"
        ]
        separator => ","
        remove_field => ["type","host"]
    }
    mutate {
        split => { "antype" => ";" }
        split => { "broaderlevel3" => ";" }
        split => { "broaderlevel2" => ";" }
        split => { "broaderlevel1" => ";" }
        split => { "categorylabel" => ";" }
        split => { "toppreferedlabel" => ";" }
    }
}
output {
    stdout { }
    elasticsearch {
        hosts => ["localhost"]
        index => "iet-tv"
    }
}
I don't get any error message while running logstash, but the content is not getting ingested into Elasticsearch.
Below is the log:
[2022-06-29T14:03:03,579][INFO ][logstash.runner ] Log4j configuration path used is: D:\logstash-8.2.3\config\log4j2.properties
[2022-06-29T14:03:03,595][WARN ][logstash.runner ] The use of JAVA_HOME has been deprecated. Logstash 8.0 and later ignores JAVA_HOME and uses the bundled JDK. Running Logstash with the bundled JDK is recommended. The bundled JDK has been verified to work with each specific version of Logstash, and generally provides best performance and reliability. If you have compelling reasons for using your own JDK (organizational-specific compliance requirements, for example), you can configure LS_JAVA_HOME to use that version instead.
[2022-06-29T14:03:03,598][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.2.3", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.15+10 on 11.0.15+10 +indy +jit [mswin32-x86_64]"}
[2022-06-29T14:03:03,600][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2022-06-29T14:03:03,736][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-06-29T14:03:11,340][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-06-29T14:03:12,628][INFO ][org.reflections.Reflections] Reflections took 153 ms to scan 1 urls, producing 120 keys and 395 values
[2022-06-29T14:03:15,580][INFO ][logstash.javapipeline ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2022-06-29T14:03:15,662][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2022-06-29T14:03:16,210][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2022-06-29T14:03:16,532][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2022-06-29T14:03:16,549][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.2.3) {:es_version=>8}
[2022-06-29T14:03:16,553][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2022-06-29T14:03:16,627][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-06-29T14:03:16,627][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-06-29T14:03:16,632][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[2022-06-29T14:03:16,652][INFO ][logstash.filters.csv ][main] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2022-06-29T14:03:16,694][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2022-06-29T14:03:16,762][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["D:/logstash-8.2.3/conf/inspec.conf"], :thread=>"#<Thread:0x48e38277 run>"}
[2022-06-29T14:03:18,017][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.25}
[2022-06-29T14:03:18,102][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-06-29T14:03:18,171][INFO ][filewatch.observingtail ][main][2c845ee5978dc5ed1bf8d0f617965d2013df9d31461210f0e7c2b799e02f6bb8] START, creating Discoverer, Watch with file and sincedb collections
[2022-06-29T14:03:18,220][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Any suggestions much appreciated.
Thanks
Dharmendra Kumar Singh
In Filebeat, ignore_older => 0 turns off age-based filtering. In a Logstash file input, it tells the input to ignore any file more than zero seconds old, and since the file input sleeps between its periodic polls for new files, that can mean it ignores all files, even if they are being updated.
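A minimal fix, assuming the same paths as in the question, is simply to drop that option (or set it to a generous window in seconds):

input {
    file {
        path => "D:/logstash-8.2.3/inspec/*.*"
        type => "file"
        start_position => "beginning"
        sincedb_path => "NUL"
        # ignore_older removed: a value of 0 tells the input to skip
        # any file more than zero seconds old
    }
}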
In my case (Windows 10, Logstash 8.1.0), a file path with backslashes (C:\path\to\csv\etc.CSV) caused the same issue; changing the backslashes to forward slashes fixed the problem.
Here is a working logstash config:
input {
    file {
        path => "C:/path/to/csv/file.csv"
        type => "file"
        start_position => "beginning"
        sincedb_path => "NUL"
    }
}
filter {
    csv {
        columns => [
            "WID","LID","IID","Product","QTY","TID"
        ]
        separator => ","
    }
    mutate {
        rename => {
            "WID" => "w_id"
            "LID" => "l_id"
            "IID" => "i_id"
            "Product" => "product"
            "QTY" => "quantity"
        }
        convert => {
            "w_id" => "integer"
            "l_id" => "integer"
            "i_id" => "integer"
            "quantity" => "float"
        }
        remove_field => [
            "@timestamp",
            "@version",
            "host",
            "message",
            "type",
            "path",
            "event",
            "log",
            "TID"
        ]
    }
}
output {
    elasticsearch {
        action => "index"
        hosts => ["https://127.0.0.1:9200"]
        index => "product_inline"
    }
    stdout { }
}

UseEncryption does not play nicely with AWS SNS/SQS; messages unable to be delivered to SQS

I have the following sample project:
https://gitlab.com/sunnyatticsoftware/sandbox/issue-localstack-masstransit
It uses the latest MassTransit.AmazonSQS 7.2.2 with localstack 0.11.2 running SNS/SQS as a Docker container.
I have a sample console application that publishes 2 messages, and the consumer receives them and displays them on the console.
All is good with that.
Now I want to add encryption to all the messages, so I add the following:
var aes = new AesCryptoServiceProvider();
aes.GenerateKey();
var key = aes.Key;
to the bus factory using the IAmazonSqsBusFactoryConfigurator for both publisher and consumer (both the same key, of course).
After doing so, my receivers don't get any messages. No errors.
The repository contains both the working code and the reproducible issue after introducing the c.UseEncryption(key);.
Steps to reproduce:
Run localstack with sns/sqs
docker run -it -e SERVICES=sns,sqs -e TEST_AWS_ACCOUNT_ID="000000000000" -e DEFAULT_REGION="us-east-1" -e LOCALSTACK_HOSTNAME="localhost" --rm --privileged --name localstack_main -p 4566:4566 -p 4571:4571 -p 8080-8081:8080-8081 -v "/tmp/localstack:/tmp/localstack" -v "/var/run/docker.sock:/var/run/docker.sock" -e DOCKER_HOST="unix:///var/run/docker.sock" -e HOST_TMP_FOLDER="/tmp/localstack" "localstack/localstack:0.11.2"
Clone repository
Run application
cd sample/Issue.Localstack.EventBus.MassTransit.SnsSqs.Sample/
dotnet run
You will see
Sample started..
Listening to events..
Publishing events..
Press enter to exit
but if you comment out the c.UseEncryption(key); in both ConsumerEndpointFactory.cs and PublishEndpointFactory.cs and re-run the console app you'll see that the 2 published events are properly received.
Sample started..
Listening to events..
Publishing events..
Press enter to exit
Received Bar: Bar, Message=this is a bar message
Received Foo: Foo, Message=this is a foo message
Is there a problem with this AmazonSQS implementation? Or is there some incompatibility with something in localstack?
Some samples, also found in the public repo:
public static PublisherBusControl Create(SnsSqsPublisherOptions snsSqsPublisherOptions, byte[] key)
{
    var busControl = Bus.Factory.CreateUsingAmazonSqs(c =>
    {
        //c.UseEncryption(key);
        var hostAddress = snsSqsPublisherOptions.HostAddress;
        var hostConfigurator = new AmazonSqsHostConfigurator(hostAddress);
        hostConfigurator.AccessKey(snsSqsPublisherOptions.AccessKey);
        hostConfigurator.SecretKey(snsSqsPublisherOptions.SecretKey);
        if (snsSqsPublisherOptions.HostConfiguration != null)
        {
            var hostConfiguration = snsSqsPublisherOptions.HostConfiguration;
            hostConfigurator.Config(new AmazonSimpleNotificationServiceConfig { ServiceURL = hostConfiguration.SnsServiceUrl });
            hostConfigurator.Config(new AmazonSQSConfig { ServiceURL = hostConfiguration.SqsServiceUrl });
        }
        c.Host(hostConfigurator.Settings);
    });
    var publisherBusControl = new PublisherBusControl(busControl);
    return publisherBusControl;
}
public static ConsumerBusControl Create(SnsSqsConsumerOptions snsSqsConsumerOptions, Action<IAmazonSqsReceiveEndpointConfigurator> endpointConfigurator, byte[] key)
{
    var busControl = Bus.Factory.CreateUsingAmazonSqs(c =>
    {
        //c.UseEncryption(key);
        var hostAddress = snsSqsConsumerOptions.HostAddress;
        var hostConfigurator = new AmazonSqsHostConfigurator(hostAddress);
        hostConfigurator.AccessKey(snsSqsConsumerOptions.AccessKey);
        hostConfigurator.SecretKey(snsSqsConsumerOptions.SecretKey);
        if (snsSqsConsumerOptions.HostConfiguration != null)
        {
            var hostConfiguration = snsSqsConsumerOptions.HostConfiguration;
            hostConfigurator.Config(new AmazonSimpleNotificationServiceConfig { ServiceURL = hostConfiguration.SnsServiceUrl });
            hostConfigurator.Config(new AmazonSQSConfig { ServiceURL = hostConfiguration.SqsServiceUrl });
        }
        c.Host(hostConfigurator.Settings);
        c.ReceiveEndpoint(snsSqsConsumerOptions.QueueName, endpointConfigurator);
    });
    var consumerBusControl = new ConsumerBusControl(busControl);
    return consumerBusControl;
}
UPDATE 2021/09/07
I've been investigating further. With localstack, sometimes the message gets delivered and sometimes it doesn't. But more importantly, with real AWS SNS/SQS I can see that the messages are not delivered from SNS to SQS. I enabled CloudWatch on an SNS topic and I can see the following kind of error:
{
    "notification": {
        "messageMD5Sum": "b9b92f5125c1b95bc3981e89a3e0b56c",
        "messageId": "e2f3a853-2818-58e3-ae32-f1e26bd01855",
        "topicArn": "arn:aws:sns:eu-west-1:754027052283:Sample_EventBus_MassTransit_SnsSqs_Common-IBar",
        "timestamp": "2021-09-07 10:40:08.658"
    },
    "delivery": {
        "deliveryId": "88e71359-b405-5795-9fca-4ae8f7cda555",
        "destination": "arn:aws:sqs:eu-west-1:754027052283:sample-di-queue",
        "providerResponse": "{\"ErrorCode\":\"InvalidMessageContents\",\"ErrorMessage\":\"Invalid binary character '#x18' was found in the message body, the set of allowed characters is #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF].\",\"sqsRequestId\":\"Unrecoverable\"}",
        "dwellTimeMs": 29,
        "attempts": 1,
        "statusCode": 400
    },
    "status": "FAILURE"
}
Sometimes the error is slightly different:
"{\"ErrorCode\":\"InvalidMessageContents\",\"ErrorMessage\":\"Invalid binary character '#x1' was found in the message body, the set of allowed characters is #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF].\",\"sqsRequestId\":\"Unrecoverable\"}",
I have tried different keys:
var key = Convert.ToBase64String(new byte[]{ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F });
and
var aes = new AesCryptoServiceProvider();
aes.GenerateKey();
var key = Convert.ToBase64String(aes.Key);
and then converting back from the base64 string, like
var encryptionKey = Convert.FromBase64String(key);
sqsConfig.UseEncryption(encryptionKey);
No luck. The payload is not supported.
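For illustration, here is a minimal standalone C# sketch (not a MassTransit API call) of why the CloudWatch error above occurs: raw AES ciphertext routinely contains the control bytes SQS disallows, while a text encoding such as Base64 stays within the allowed character set.

using System;
using System.Security.Cryptography;
using System.Text;

class SqsBodyCheck
{
    static void Main()
    {
        using var aes = Aes.Create();
        var plaintext = Encoding.UTF8.GetBytes("this is a bar message");
        using var encryptor = aes.CreateEncryptor();
        var ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);

        // Raw ciphertext almost always contains control bytes such as 0x18 or
        // 0x01 -- exactly the characters SQS rejects with InvalidMessageContents.
        foreach (var b in ciphertext)
        {
            if (b < 0x20 && b != 0x09 && b != 0x0A && b != 0x0D)
            {
                Console.WriteLine($"Disallowed byte in body: 0x{b:X2}");
                break;
            }
        }

        // Base64 uses only A-Z, a-z, 0-9, '+', '/', '=', all inside the
        // character set SQS allows for message bodies.
        Console.WriteLine(Convert.ToBase64String(ciphertext));
    }
}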

Error while parsing CSV to Kafka in Logstash

I am trying to send CSV data to Kafka using Logstash, implementing my own configuration script named test.conf.
I got this error while running it:
Using JAVA_HOME defined java: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.262.b10-0.el7_8.x86_64
WARNING, using JAVA_HOME while Logstash distribution comes with a bundled JDK
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2021-05-24 19:12:08.565 [main] runner - Starting Logstash {"logstash.version"=>"7.10.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 25.262-b10 on 1.8.0_262-b10 +indy +jit [linux-x86_64]"}
[FATAL] 2021-05-24 19:12:08.616 [main] runner - An unexpected error occurred! {:error=>#<ArgumentError: Path "/usr/share/logstash/data" must be a writable directory. It is not writable.>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:530:in `validate'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:290:in `validate_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:201:in `block in validate_all'", "org/jruby/RubyHash.java:1415:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:200:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:317:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:273:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:88:in `<main>'"]}
[ERROR] 2021-05-24 19:12:08.623 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
This is the command used to run logstash.
/usr/share/logstash/bin/logstash -f test.conf
Here is the config file.
input {
    file {
        path => "/home/data/*.csv"
        start_position => "beginning"
        sincedb_path => "/dev/null"
    }
}
filter {
    mutate {
        add_field => {
            "timestamp" => "%{Date} %{Time}"
        }
    }
    date { match => ["timestamp", "dd-MM-YYYY HH:mm:ss"] }
    csv {
        remove_field => ["Date", "Time"]
    }
    grok {
        match => { "message" => [
            "^%{DATE:timestamp},%{NUMBER:ab},%{NUMBER:cd},%{NUMBER:ef},%{NUMBER:gh},%{NUMBER:ij},%{NUMBER:kl},%{NUMBER:mn},%{NUMBER:op},%{NUMBER:qr},%{NUMBER:st},%{NUMBER:uv},%{NUMBER:wx},%{NUMBER:yz}$"
        ] }
    }
}
output {
    stdout { codec => rubydebug }
    if "_grokparsefailure" not in [tags] {
        kafka {
            codec => "json"
            topic_id => "abcd1234"
            bootstrap_servers => "192.16.12.119:9092"
        }
    }
}
Please help me with this.
First of all, make sure your server IP and port (192.16.12.119:9092) are reachable, e.g. with telnet 192.16.12.119 9092.
After that: you forgot one field in the Kafka output section. Add the group_id field to your Kafka output, such as:
kafka {
    group_id => "35834"
    topics => ["Your topic name"]
    bootstrap_servers => "192.16.12.199:9092"
    codec => json
}
If it still doesn't work, change your bootstrap servers to the advertised-listener form, like below:
bootstrap_servers => "advertised.listeners=PLAINTEXT://192.16.12.199:9092"
And finally: do not mention your server or system IP on the internet, it's not safe ;)
Your config is not even being loaded; you have a FATAL error when starting logstash.
[FATAL] 2021-05-24 19:12:08.616 [main] runner - An unexpected error occurred! {:error=>#<ArgumentError: Path "/usr/share/logstash/data" must be a writable directory. It is not writable.>,
The user that you are using to run logstash does not have permission to write to this directory; it needs permission to write to the path.data directory or logstash won't start.
Your logstash.yml file is also not being loaded.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
First, give the user running logstash permission to write into path.data; you can change path.data in the logstash.yml file, and then pass the path to that file on the command line.
Considering that you installed logstash using a package manager like yum or apt, your logstash.yml file will be in the directory /etc/logstash/.
So you need to run logstash this way:
/usr/share/logstash/bin/logstash -f /path/to/your/config.conf --path.settings /etc/logstash/.
In the logstash.yml you need to set path.data to a directory where the user has permissions to write.
path.data: /path/to/writable/directory
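For example, assuming Logstash runs as a dedicated logstash user (adjust the user name and paths to your installation):

# give the logstash user ownership of the default data directory
sudo chown -R logstash:logstash /usr/share/logstash/data
# or point path.data at a directory that user owns, in /etc/logstash/logstash.yml:
#   path.data: /home/youruser/logstash-data
/usr/share/logstash/bin/logstash -f test.conf --path.settings /etc/logstash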

Logstash 'com.mysql.jdbc.Driver' not loaded

I have a problem with jdbc_driver_library. I'm using ELK_VERSION = 6.4.2 and I use Docker for ELK.
When I run:
/opt/logstash# bin/logstash -f /etc/logstash/conf.d/mysql.conf
I'm getting an error:
error: com.mysql.jdbc.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Driver path:
root@xxxxxxx:/etc/logstash/conectors# ls
mysql-connector-java-8.0.12.jar
root@xxxxxxxxxx:/etc/logstash/conectors#
mysql.conf:
input {
    jdbc {
        jdbc_driver_library => "/etc/logstash/conectors/mysql-connector-java-8.0.12.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
        jdbc_user => "demouser"
        jdbc_password => "demopassword"
        statement => "SELECT id,name,city from ads"
    }
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        index => 'test'
        document_type => 'tes'
        document_id => '%{id}'
        hosts => ['http://localhost:9200']
    }
}
The whole error:
root@xxxxx:/opt/logstash# bin/logstash -f /etc/logstash/conf.d/mysql.conf
Sending Logstash logs to /opt/logstash/logs which is now configured via log4j2.properties
[2018-11-10T09:03:22,081][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-11-10T09:03:23,628][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.4.2"}
[2018-11-10T09:03:30,482][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-10T09:03:31,479][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-11-10T09:03:31,928][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-11-10T09:03:32,067][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-11-10T09:03:32,076][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-11-10T09:03:32,154][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2018-11-10T09:03:32,210][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-11-10T09:03:32,267][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-11-10T09:03:32,760][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x202f727c run>"}
[2018-11-10T09:03:32,980][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-11-10T09:03:33,877][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-11-10T09:03:34,315][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Jdbc jdbc_user=>"demouser", jdbc_password=><password>, statement=>"SELECT id,name,city from ads", jdbc_driver_library=>"/etc/logstash/conectors/mysql-connector-java-8.0.12.jar", jdbc_connection_string=>"jdbc:mysql://localhost:3306/mydb", id=>"233c4411c2434e93444c3f59eb9503f3a75cab4f85b0a947d96fa6773dac56cd", jdbc_driver_class=>"com.mysql.jdbc.Driver", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_cf5ab80c-91e4-4bc4-8d20-8c5a0f9f8077", enable_metric=>true, charset=>"UTF-8">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>"info", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, parameters=>{"sql_last_value"=>1970-01-01 00:00:00 +0000}, last_run_metadata_path=>"/root/.logstash_jdbc_last_run", use_column_value=>false, tracking_column_type=>"numeric", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>
Error: com.mysql.jdbc.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Exception: LogStash::ConfigurationError
Stack: /opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/plugin_mixins/jdbc/jdbc.rb:163:in `open_jdbc_connection'
/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/plugin_mixins/jdbc/jdbc.rb:221:in `execute_statement'
/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/inputs/jdbc.rb:277:in `execute_query'
/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/inputs/jdbc.rb:263:in `run'
/opt/logstash/logstash-core/lib/logstash/pipeline.rb:409:in `inputworker'
/opt/logstash/logstash-core/lib/logstash/pipeline.rb:403:in `block in start_input'
When I build an image and use docker run, I get another error:
[2018-11-10T10:32:52,935][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
[2018-11-10T10:32:52,966][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash/data/dead_letter_queue"}
[2018-11-10T10:32:54,509][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
Same problem when I use PostgreSQL.
psql.conf:
input {
    jdbc {
        type => 'test'
        jdbc_driver_library => '/etc/logstash/postgresql-9.1-901-1.jdbc4.jar'
        jdbc_driver_class => 'org.postgresql.Driver'
        jdbc_connection_string => 'jdbc:postgresql://localhost:5432/mytestdb'
        jdbc_user => 'postgres'
        jdbc_password => 'xxxxxx'
        jdbc_page_size => '50000'
        statement => 'SELECT id, name, city FROM ads'
    }
}
Then I run:
/opt/logstash# bin/logstash -f /etc/logstash/conf.d/psql.conf
Error:
error: org.postgresql.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
I got the same issue and the solution below fixed it.
For Logstash 6.2.x and above, add the required drivers under:
logstash_install_dir/logstash-core/lib/jars/
and don't provide any driver path in the config file.
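With the paths from this question (Logstash installed under /opt/logstash), that would be something like:

cp /etc/logstash/conectors/mysql-connector-java-8.0.12.jar /opt/logstash/logstash-core/lib/jars/

and then remove the jdbc_driver_library line from mysql.conf.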
I solved the problem.
First check your Java version:
root@xxxxxx:/# java -version
openjdk version "1.8.0_181"
If you are using 1.8 then you should use the JDBC42 version.
If you are using 1.7 then you should use the JDBC41 version.
If you are using 1.6 then you should use the JDBC4 version.
Postgres setup:
postgresql-9.4-1203.jdbc42.jar
jdbc_driver_library => '/path_to_jar/postgresql-9.4-1203.jdbc42.jar'
jdbc_driver_class => 'org.postgresql.Driver'
MySQL setup:
mysql-connector-java-5.1.46.jar
jdbc_driver_library => "//path_to_jar/mysql-connector-java-5.1.46.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
In the MySQL Connector/J 8 driver that you're using, the JDBC driver class was renamed from com.mysql.jdbc.Driver to com.mysql.cj.jdbc.Driver (see the release notes for details). Just update your jdbc_driver_class configuration and you should be OK.
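Applied to the mysql.conf above (paths and credentials exactly as in the question), that is:

input {
    jdbc {
        jdbc_driver_library => "/etc/logstash/conectors/mysql-connector-java-8.0.12.jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"   # renamed in Connector/J 8
        jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
        jdbc_user => "demouser"
        jdbc_password => "demopassword"
        statement => "SELECT id,name,city from ads"
    }
}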
I had a similar issue, though with a different setup: I'm using a virtual machine, not a Docker image. The issue was solved by installing OpenJDK 8 and setting it as the default Java version on my Ubuntu Server virtual machine.
https://linuxize.com/post/install-java-on-ubuntu-18-04/
Hope this helps!
EDIT: Before that, I also had to change the authentication method of the MySQL root user from auth_socket to mysql_native_password:
https://www.digitalocean.com/community/tutorials/how-to-install-mysql-on-ubuntu-18-04

Parse Dashboard configuration error "Your config file contains invalid JSON. Exiting."

I am trying to install Parse Dashboard on AWS. The public directory works but the /apps directory is blank.
When looking at the logs I see
> parse-dashboard#1.0.14 start /var/app/current
> node ./Parse-Dashboard/index.js
Your config file contains invalid JSON. Exiting.
I am deploying parse-dashboard from GitHub, and I have entered values in parse-dashboard-config.json that match the keys on parse.com.
This is the JSON that I am using
{
    "apps": [
        {
            "serverURL": "xxxxxxxxxxxxxxxxxxxxxxx/parse",
            "appId": "xxxxxxxxxxxxxxxxxxxxxxxxxxx",
            "masterKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
            "appName": "xxxxxxxxxxxxxx"
        }
    ],
    "iconsFolder": "icons"
}
In index.js the log is generated by:
if (error instanceof SyntaxError) {
    console.log('Your config file contains invalid JSON. Exiting.');
    process.exit(1);
}
For anyone who's looking for an answer: the parse-dashboard-config.json file should be in the current folder from which you're running
parse-dashboard --config parse-dashboard-config.json
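A quick way to check whether the file itself parses as JSON (assuming Node.js is available, which the dashboard requires anyway):

node -e "JSON.parse(require('fs').readFileSync('parse-dashboard-config.json', 'utf8')); console.log('valid JSON')"

Anything like a trailing comma, a UTF-8 BOM, or smart quotes from a copy-paste will make this print a SyntaxError instead.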
