Spring Mongo Log4j customize

How can I customize Spring's log4j output into the Mongo datastore?
I was able to follow Spring's example on how to use MongoLog4j. The logs are being persisted into MongoDB, but whatever is in my conversion pattern is not respected. My desire is to store the line number in the log message.
Here's my log4j properties file:
log4j.rootCategory=INFO, stdout
log4j.appender.stdout=org.springframework.data.mongodb.log4j.MongoLog4jAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] [%L] - <%m>%n
log4j.appender.stdout.host = localhost
log4j.appender.stdout.port = 27017
log4j.appender.stdout.database = prod
log4j.appender.stdout.collectionPattern = logs
log4j.appender.stdout.applicationId = horizon
log4j.appender.stdout.warnOrHigherWriteConcern = FSYNC_SAFE
log4j.category.org.springframework.batch=DEBUG
log4j.category.org.springframework.data.document.mongodb=DEBUG
log4j.category.org.springframework.transaction=INFO
Below is what is being stored in Mongo.
{ "_id" : ObjectId("4f720482788d6140dacb0270"), "applicationId" : "test", "na
me" : "com.service.MongoTest", "level" : "DEBUG", "timestamp
" : ISODate("2012-03-27T18:18:42.981Z"), "properties" : { "applicationId" : "test" }, "message" : "Debug TEST3" }

Looking at Spring's source code, conversion-pattern support doesn't seem to be implemented there. Instead, I found another project that has line numbers and custom conversion patterns implemented. The project is
http://log4mongo.org/
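For anyone who ends up here, a minimal log4mongo configuration might look like the sketch below. The class and property names come from the log4mongo-java documentation as I recall it, so treat them as assumptions and verify them against the version you download. The pattern-layout appender parses the layout's output (which must be valid JSON) into the stored document, which is what lets a %L line number through.

# Assumed log4mongo-java appender and property names - verify against its docs
log4j.appender.MongoDB=org.log4mongo.MongoDbPatternLayoutAppender
log4j.appender.MongoDB.databaseName=prod
log4j.appender.MongoDB.collectionName=logs
log4j.appender.MongoDB.hostname=localhost
log4j.appender.MongoDB.port=27017
log4j.appender.MongoDB.layout=org.apache.log4j.PatternLayout
# Each JSON key in the pattern becomes a field of the stored document; %L is the line number
log4j.appender.MongoDB.layout.ConversionPattern={"timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss}","level":"%p","logger":"%c","line":"%L","message":"%m"}
log4j.rootCategory=INFO, MongoDB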

Related

Apache Superset oauth2 with custom Spring-Security OAuth2 server

I am using Apache Superset and trying to configure its OAuth2 capability to connect to my (custom) Spring-Security OAuth2 server. Unfortunately, it isn't working right now. The stack trace begins with this:
15:09:16.584 [qtp1885996206-21] ERROR org.springframework.boot.web.support.ErrorPageFilter - Forwarding to error page from request [/oauth/authorize] due to exception [Could not resolve view with name 'forward:/oauth/confirm_access' in servlet with name 'dispatcherServlet']
javax.servlet.ServletException: Could not resolve view with name 'forward:/oauth/confirm_access' in servlet with name 'dispatcherServlet'
    at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1262) ~[spring-webmvc-4.3.8.RELEASE.jar:4.3.8.RELEASE]
    at
Here is the relevant portion of my config.py from Superset.
AUTH_TYPE = AUTH_OAUTH
OAUTH_PROVIDERS = [
    {
        "name" : "MY-OAUTH",
        "icon" : APP_ICON,
        "token_key" : "password",
        "remote_app" : {
            "consumer_key" : "my_dashboard",
            "consumer_secret" : "my_secret",
            "base_url" : "http://localhost:8088/myoauth",
            "request_token_params" : {
                "scope": "my_dashboard read write",
                "grant_type" : "password"
            },
            "request_token_url" : None,
            "access_token_url" : "http://localhost:8088/myoauth/oauth/token",
            "access_token_params" : {
                "scope": "my_dashboard read write",
                "grant_type" : "password",
                "response_type" : "authorization_code"
            },
            "access_token_method" : "POST",
            "authorize_url" : "http://localhost:8088/myoauth/oauth/authorize"
        }
    }
]
A nice gentleman suggested that I have somehow disabled the servlet handler for /oauth/confirm_access, but I am not sure how to check that or how to fix such a problem.
Do you know what is going on here, what I can do to fix this or where I can start looking?
Thanks,
Matt
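For anyone debugging the same trace: the forward target /oauth/confirm_access is normally served by Spring Security OAuth's WhitelabelApprovalEndpoint. If a custom @EnableAuthorizationServer setup replaces or disables it, one way to restore the endpoint is a small controller modeled on the sparklr sample from spring-security-oauth. This is a sketch, not a confirmed fix for the setup above; the view name access_confirmation is hypothetical and must exist in your view resolver.

import java.util.Map;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.SessionAttributes;
import org.springframework.web.servlet.ModelAndView;

@Controller
@SessionAttributes("authorizationRequest")
public class AccessConfirmationController {

    // Serves the forward target that the DispatcherServlet failed to resolve
    @RequestMapping("/oauth/confirm_access")
    public ModelAndView getAccessConfirmation(Map<String, Object> model) {
        return new ModelAndView("access_confirmation", model);
    }
}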

Why are my Spring Data repositories not found if moved into another package of a Spring Boot application?

I created a new project in Eclipse from these helpful spring examples (Import Getting Started Content). It is called "gs-accessing-data-rest-complete"
Reference and full Code can be found: spring-guides/gs-accessing-data-rest
When leaving the example unchanged, except for using WAR instead of JAR packaging, everything works well. When calling $ curl http://localhost:8080/, I get a listing of the usable resources:
$ curl http://localhost:8080/
{
  "_links" : {
    "people" : {
      "href" : "http://localhost:8080/name{?page,size,sort}",
      "templated" : true
    },
    "profile" : {
      "href" : "http://localhost:8080/alps"
    }
  }
}
But when moving the PersonRepository into another package, e.g. myRepos, via Eclipse's Refactor --> Move command, the resource is not accessible anymore.
The response from curl is then:
$ curl http://localhost:8080/
{
  "_links" : {
    "profile" : {
      "href" : "http://localhost:8080/alps"
    }
  }
}
As far as I understood, Spring scans for repositories automatically. Because the main class uses the @SpringBootApplication annotation, everything should be found by Spring itself.
What am I missing? Do I have to add some special XML configuration file or another configuration class somewhere? Or do I have to update application.properties somehow?
Perhaps somebody has some useful experience she or he might share with me. Thank you.
Try specifying the base package to use when scanning for repositories by putting this annotation on your config class: @EnableJpaRepositories(basePackages = "your.base.repository.package")
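As a minimal sketch (assuming the repository was moved to a hypothetical package com.example.myRepos; substitute your own):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

@SpringBootApplication
@EnableJpaRepositories(basePackages = "com.example.myRepos") // hypothetical package
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

The background: @SpringBootApplication only scans the package of the annotated class and its subpackages, so a repository moved outside that tree is no longer picked up unless you point the scan at it explicitly.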

Logstash just gives me two logs

Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Using milestone 2 filter plugin 'csv'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
My configuration:
input {
  file {
    path => [ "e:\mycsvfile.csv" ]
    start_position => "beginning"
  }
}

filter {
  csv {
    columns => ["col1","col2"]
    source => "csv_data"
    separator => ","
  }
}

output {
  elasticsearch {
    host => localhost
    port => 9200
    index => test
    index_type => test_type
    protocol => http
  }
  stdout {
    codec => rubydebug
  }
}
My environment:
Windows 8
logstash 1.4.2
Question: Has anyone experienced this before? Where do the logstash logs go? Are there known logstash bugs on Windows? My experience is that logstash does not do anything.
I tried:
logstash.bat agent -f test.conf --verbose
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Using milestone 2 filter plugin 'csv'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Registering file input {:path=>["e:/temp.csv"], :level=>:info}
No sincedb_path set, generating one based on the file path {:sincedb_path=>"C:\Users\gemini/.sincedb_d8e46c18292a898ea0b5b1cd94987f21", :path=>["e:/temp.csv"], :level=>:info}
Pipeline started {:level=>:info}
New Elasticsearch output {:cluster=>nil, :host=>"localhost", :port=>9200, :embedded=>false, :protocol=>"http", :level=>:info}
Automatic template management enabled {:manage_template=>"true", :level=>:info}
Using mapping template {:template=>"{ \"template\" : \"logstash-*\", \"settings\" : { \"index.refresh_interval\" : \"5s\" }, \"mappings\" : { \"_default_\" : { \"_all\" : {\"enabled\" : true}, \"dynamic_templates\" : [ { \"string_fields\" : { \"match\" : \"*\", \"match_mapping_type\" : \"string\", \"mapping\" : { \"type\" : \"string\", \"index\" : \"analyzed\", \"omit_norms\" : true, \"fields\" : { \"raw\" : {\"type\": \"string\", \"index\" : \"not_analyzed\", \"ignore_above\" : 256} } } } } ], \"properties\" : { \"@version\": { \"type\": \"string\", \"index\": \"not_analyzed\" }, \"geoip\" : { \"type\" : \"object\", \"dynamic\": true, \"path\": \"full\", \"properties\" : { \"location\" : { \"type\" : \"geo_point\" } } } } } }}", :level=>:info}
It stays like this for a while and no new index is created in elasticsearch.
I had to add:
sincedb_path => "NIL"
and it worked.
http://logstash.net/docs/1.1.0/inputs/file#setting_sincedb_path
sincedb_path
Value type is string. There is no default value for this setting. Where to write the since database (keeps track of the current position of monitored log files). Defaults to the value of environment variable "$SINCEDB_PATH" or "$HOME/.sincedb".
I've had several sincedb files generated in my C:\Users\{user} directory.
While using CSV as the input data, I had to add:
sincedb_path => "NIL" inside the file {} block.
Example:
input {
  file {
    path => [ "C:/csvfilename.txt" ]
    start_position => "beginning"
    sincedb_path => "NIL"
  }
}
and it worked for logstash version 1.4.2

Call Spring batch jobs on remote server

I use Spring Batch Admin to manage and monitor jobs and executions. How can I launch a job from a standalone Java application over an HTTP connection to the server hosting the Spring Batch Admin web app?
Thank you for any help
You can use the Spring Batch Admin JSON API to do so - it is possible to list jobs as well as to run them. Additionally, you can expose JMX beans to monitor and manage batch jobs remotely.
Below is an example of a JSON POST request to the job service, launching the job named 'job1':
$ curl -d jobParameters=fail=false http://localhost:8080/spring-batch-admin-sample/batch/jobs/job1.json
{"jobExecution" : {
"resource" : "http://localhost:8080/spring-batch-admin-sample/batch/jobs/executions/2.json",
"id" : "2",
"status" : "STARTING",
"startTime" : "",
"duration" : "",
"exitCode" : "UNKNOWN",
"exitDescription" : "",
"jobInstance" : { "resource" : "http://localhost:8080/spring-batch-admin-sample/batch/jobs/job1/1.json" },
"stepExecutions" : {
}
}
}
You can also simply use HttpURLConnection with the job URL and its parameters.
The URL construct would be like:
"http://<host>:8080/spring-batch-admin-sample/batch/jobs/yourJob?jobParameters=" + URLEncoder.encode("param1=value,param2=value2", "UTF-8")
Let me know if you need any clarification.
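To make that concrete, here is a minimal sketch of such a client. The host, port, and job name are placeholders taken from the examples above; adjust them to your deployment.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class JobLauncherClient {
    public static void main(String[] args) throws Exception {
        // POST to the job's .json endpoint, as in the curl example above
        URL url = new URL("http://localhost:8080/spring-batch-admin-sample/batch/jobs/job1.json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);

        // Job parameters go in the form body as comma-separated key=value pairs
        String body = "jobParameters=" + URLEncoder.encode("param1=value1,param2=value2", "UTF-8");
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes("UTF-8"));
        }

        // Read back the jobExecution JSON shown above
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}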

How do I pass UserData to a Beanstalk instance with CloudFormation

I need the application server, which runs on Beanstalk instances, to do some actions upon startup, and I thought of running a bash script passed to the instance with the UserData property, which is available to regular EC2 instances.
I've found several example CloudFormation templates which do this with regular EC2 instances, but none with Beanstalk. I've tried to add this to the properties field for the application:
"MyApp" : {
"Type" : "AWS::ElasticBeanstalk::Application",
"Properties" : {
"Description" : "MyApp description",
"ApplicationVersions" : [{
...
}],
"UserData" : {
"Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n",
"touch /tmp/userdata_sucess\n"
]]
}},
...
I also tried to add to the environment part:
"MyAppEnv" : {
"Type" : "AWS::ElasticBeanstalk::Environment",
"Properties" : {
"ApplicationName" : { "Ref" : "MyApp" },
"Description" : "MyApp environment description",
"UserData" : {
"Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n",
"touch /tmp/userdata_sucess\n"
]]
}},
"TemplateName" : "MyAppConfiguration",
"VersionLabel" : "First Cloud version"
}
},
In both cases this resulted in failure when trying to create the stack. Does anyone know if it is possible to pass UserData to a Beanstalk instance using CloudFormation? If so, can you provide an example?
If you want to keep all the advantages that Beanstalk offers - like not having to patch the OS, which Amazon does for you - this isn't possible. One option is to create a custom AMI that includes the needed scripts, but then you must manage the OS yourself, including security patches. Read more here.
You can do this with .ebextensions; see the Amazon docs.
An example:
packages:
  yum:
    bison: []
    libpcap-devel: []
    libpcap: "1.4.0"
    golang: "1.13.4"
    git: []

commands:
  20_show_info_pkgs:
    env:
      GOPATH: /usr/local/gocode
      PATH: $PATH:/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin:/usr/local/bin
    ignoreErrors: true
    command: |
      ls -l /usr/local /usr/local/g*
      env
      yum list bison libpcap-devel libpcap golang git
      which git
      which go
      git --version
      go version
      goreplay version
      true
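Note: these directives go in a file under the .ebextensions directory at the root of your application source bundle (e.g. a hypothetical .ebextensions/setup.config); Elastic Beanstalk applies them to each instance at deployment time, which covers the startup-actions use case without a custom AMI.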
