Enable Apache Felix Web Console to show more than 100 logs - osgi

We are using the Apache Felix Web Console to display active bundles, configurations and logs. However, it would be very nice to be able to display more than just the default 100 log entries. Is there a way to configure the console in such a way? I didn't find anything in the official documentation, but this is such a basic requirement that I would guess that there is a solution for this somewhere. Am I missing something?

This cap is actually not in the web console but in the Log Service itself, which is limited to 100 entries by default. See: http://felix.apache.org/documentation/subprojects/apache-felix-log.html
To increase this to, say, 200 you can set the following system property:
-Dorg.apache.felix.log.maxSize=200
If you set maxSize to -1 then you will get an unlimited size log, but beware that it will then grow until you run out of memory.
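For example, if you start the framework directly with java, the flag goes on the command line (200 is just the example value from above):
java -Dorg.apache.felix.log.maxSize=200 -jar bin/felix.jar
With the standard Felix framework distribution the same setting should also work as a framework property in conf/config.properties, since the Log bundle reads it via BundleContext.getProperty:
org.apache.felix.log.maxSize=200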

Related

Configure Apache Solr logging to show warnings and slow queries via global config file

I start Solr in the foreground like so: C:\solr-8.10.1\bin\solr start -p 8983 -m 1536m -f -v
It opens a command window and logs a massive amount of DEBUG output, which I don't need.
I want to reduce the amount of logging here, and I found this: https://solr.apache.org/guide/8_5/configuring-logging.html
This seems exactly like what I need for my scenario:
I have many cores, each with their own solrconfig.xml:
C:\solr-8.10.1\server\solr\core1
C:\solr-8.10.1\server\solr\core2
C:\solr-8.10.1\server\solr\core3
C:\solr-8.10.1\server\solr\coreX
I don't want to make the logging changes to each core separately; I want one global setting that applies to all of them
I don't use the Solr API; I want to be able to change settings via config files
I want ERRORS to be logged, and also any slow queries
After reading the tutorial, I concluded that I need to:
start Solr using solr start -p 8983 -m 1536m -f -q
add a <slowQueryThresholdMillis>1000</slowQueryThresholdMillis> element (see the sketch after this question)
However, it's that last part where I have questions. I see references to so-called configsets, but I have no idea whether that is where I need to configure my global settings.
I inspected the sample files, e.g. \solr-8.10.1\server\solr\configsets\sample_techproducts_configs\conf\solrconfig.xml
But I can't figure out if that's the right config file or how it would even apply to all other cores without any reference to the other cores.
I've had a look at these already, but they seem to want to handle things via code, whereas I'm looking for a file configuration:
configure Logger via global config file
Use of readConfiguration method in logging activities
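For reference, the Solr ref guide places slowQueryThresholdMillis inside the <query> section of solrconfig.xml, so a per-core fragment would look roughly like the sketch below (1000 ms is the threshold from the question); whether a shared configset can make this apply to every core is exactly the open question here:
<config>
  <query>
    <!-- queries slower than this many milliseconds are logged as slow queries at WARN level -->
    <slowQueryThresholdMillis>1000</slowQueryThresholdMillis>
  </query>
</config>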

Daily rolling log file in Websphere Liberty (16.0.0.4-WS-LIBERTY-CORE)

How to create a daily rolling log file in Websphere Liberty? I want the name of the log file to have YYYYMMDD format.
Currently I'm only able to limit the max file size and the number of files, set a static file name instead of the default messages.log, and disable the console log.
<logging consoleLogLevel="OFF" maxFileSize="1" maxFiles="3" messageFileName="loggingMessages.log"/>
https://www.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.wlp.doc/ae/rwlp_logging.html
WebSphere Liberty does not currently have the ability to schedule log file rotation like traditional WAS. You can request this feature using the RFE site.
Alternatively, you could use an approach like Bruce mentioned - perhaps using a cron job to restart the server at midnight.
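A minimal sketch of that workaround, assuming a Liberty install under /opt/wlp and a server named defaultServer (both placeholders):
# crontab entry: restart the server at midnight so a fresh messages.log is started
0 0 * * * /opt/wlp/bin/server stop defaultServer && /opt/wlp/bin/server start defaultServer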
You might also consider configuring Liberty's binary logging. This will create a binary log file that can be queried to produce actual log files (with filtering options, etc.). It does have some time-based options. More info here.
Hope this helps, Andy
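A rough sketch of the binary logging approach mentioned above (the server name is a placeholder, and the exact binaryLog filter options and date format should be checked against your Liberty version's command help):
# bootstrap.properties: switch the server to binary logging
websphere.log.provider=binaryLogging-1.0
# later, extract a readable log for one day from the binary repository
wlp/bin/binaryLog view defaultServer --minDate=<start date> --maxDate=<end date>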
Probably not the answer you want, but if you restart the server it will roll the log.

Logging for two different environment logs in to a single log file

I am quite new to the log4j2 logger, and my requirement is to write logs from both an application server and a web server.
I have two different environments, each with a JBoss server deployed.
Right now there is a log file in the web server environment that records errors, and I want the application server to write its logs to the same file.
Please suggest how to do this.
If you want the logs to be integrated together you should use a solution like Splunk or Elastic Search/Logstash/Kibana (ELK).
When you try to write to the same file from two different processes, the file will get corrupted unless you use file locking; but locking reduces throughput significantly and isn't supported for rolling file appenders. So the best approach is to send the logs to a single process where they can be aggregated.
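A minimal sketch of that "single aggregation point" approach using log4j2's Socket appender with JSON output (the collector host and port are placeholders for something like a Logstash TCP input; JsonLayout needs the Jackson jars on the classpath):
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
  <Appenders>
    <!-- each JBoss instance ships its events to the central collector -->
    <Socket name="Collector" host="logcollector.example.com" port="4560">
      <JsonLayout compact="true" eventEol="true"/>
    </Socket>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="Collector"/>
    </Root>
  </Loggers>
</Configuration>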

Check # of JDBC Connections in Use

My application has been setup with jdbc connection pool in JBoss, is there any way I can check the # of connections currently in use while running a particular method?
<min-pool-size>10</min-pool-size>
<max-pool-size>50</max-pool-size>
I am assuming you are using JBoss AS7.
I am not sure if I understand what you mean by "running a particular method". You can run the following CLI command to get data source run time statistics assuming you have statistics enabled on the particular datasource:
/subsystem=datasources/data-source=YourDataSource:read-resource(include-runtime=true,recursive=true)
You will get a number of metrics returned; the one that tells you the current number of connections is the ActiveCount attribute. MaxUsedCount is also a good metric to observe, since it tells you the maximum number of connections ever checked out of the pool, so you can determine whether your max-pool-size is configured properly for your load requirements.
Alternatively, you can get these stats from JConsole by running the jconsole.sh (Linux/Unix) or jconsole.bat (Windows) script from the bin directory of the JBoss installation.
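For completeness, the same data can be narrowed to just the pool statistics from bin/jboss-cli.sh --connect (YourDataSource is a placeholder, and on some versions statistics have to be enabled on the datasource first):
/subsystem=datasources/data-source=YourDataSource/statistics=pool:read-resource(include-runtime=true)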

How does one run Spring XD in distributed mode?

I'm looking to start Spring XD in distributed mode (more specifically deploying it with BOSH). How does the admin component communicate to the module container?
If it's via TCP/HTTP, surely I'll have to tell the admin component where all the containers are? If it's via Redis, I would've thought that I'll need to tell the containers where the Redis instance is?
Update
I've tried running xd-admin and Redis on one box, and xd-container on another with redis.properties updated to point to the admin box. The container starts without reporting any exceptions.
Running the example stream creation command curl -d "time | log" http://{admin IP}:8080/streams/ticktock yields no output on either console and nothing in the logs.
If you are using the xd-container script, then redis.properties is expected to be under XD_HOME/config, where XD_HOME points to the base directory containing the bin, config, lib & modules directories of XD.
Communication between the Admin and Container runtime components is via the messaging bus, which by default is Redis.
Make sure the environment variable XD_HOME is set as per the documentation; if it is not, you will see a logging message that suggests the properties file has been loaded correctly when in fact it has not:
13/06/24 09:20:35 INFO support.PropertySourcesPlaceholderConfigurer: Loading properties file from URL [file:../config/redis.properties]
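As an illustration, on the container box this would look roughly like the following (the install path and Redis host are placeholders, and the property keys are the ones used in the redis.properties shipped with the XD distribution, so check them against your copy):
export XD_HOME=/opt/spring-xd/xd
# XD_HOME/config/redis.properties, pointing at the box where Redis is running
redis.hostname=192.168.1.10
redis.port=6379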
