Configure Apache Solr logging to show warnings and slow queries via global config file - Windows

I start Solr in the foreground like so: C:\solr-8.10.1\bin\solr start -p 8983 -m 1536m -f -v
It shows a command window and it logs a massive amount of DEBUG info, which I don't need.
I want to reduce the amount of logging here, and I found this: https://solr.apache.org/guide/8_5/configuring-logging.html
This seems exactly like what I need for my scenario:
I have many cores, each with their own solrconfig.xml:
C:\solr-8.10.1\server\solr\core1
C:\solr-8.10.1\server\solr\core2
C:\solr-8.10.1\server\solr\core3
C:\solr-8.10.1\server\solr\coreX
I don't want to have to make the logging changes to each core separately; I want one global setting that applies to all of them
I don't use the Solr API; I want to be able to change settings via config files
I want ERRORS to be logged, and also any slow queries
After reading the tutorial, I decided I need to:
Start Solr using solr start -p 8983 -m 1536m -f -q
Add an element <slowQueryThresholdMillis>1000</slowQueryThresholdMillis>
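For context, the guide shows this element going inside the <query> section of a core's solrconfig.xml; a minimal sketch of that per-core placement (which is exactly what I'd like to avoid repeating for every core):

<!-- in each core's solrconfig.xml, per the 8.x ref guide -->
<query>
  <slowQueryThresholdMillis>1000</slowQueryThresholdMillis>
  <!-- existing cache and query settings stay as they are -->
</query>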
However, it's that element-adding part where I have questions. I see a reference made to so-called configsets, but I have no idea if that's where I need to configure my global settings.
I inspected the sample files, e.g. \solr-8.10.1\server\solr\configsets\sample_techproducts_configs\conf\solrconfig.xml
But I can't figure out if that's the right config file or how it would even apply to all other cores without any reference to the other cores.
I've had a look at these already, but they seem to want to handle things via code, whereas I'm looking for a file configuration:
configure Logger via global config file
Use of readConfiguration method in logging activities
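The other thing I found: the guide's permanent-logging section points at server\resources\log4j2.xml, which does look like one global file rather than something per-core. A trimmed sketch of the change I think it wants for the WARN/ERROR part (appender and logger names as in the file shipped with 8.x, so check your copy; I have not verified whether this file also controls the slow-query threshold):

<!-- server\resources\log4j2.xml: only the Loggers section shown -->
<Loggers>
  <!-- raise the global threshold so only WARN and ERROR are logged -->
  <Root level="warn">
    <AppenderRef ref="MainLogFile"/>
    <AppenderRef ref="STDOUT"/>
  </Root>
</Loggers>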

Related

Query Wildfly for a value and then use that in a CLI script

I have an Ansible script to update and maintain my WildFly installation. One of my tasks in this setup is managing the MySQL driver, and in order to perform an update on that driver, I have to first disable the application that uses the driver before I can replace it and set up all my datasources anew.
My CLI script starts with the following lines:
if (outcome == success) of /deployment=my-app-1.1.ear:read-resource
deployment disable my-app-1.1.ear
end-if
My problem is that I am very dependent here on the actual name of the application, and that name can change over time since I have my version information in it.
I tried the following:
set foo=`ls /deployment`
deployment disable $foo
It did not work: when I look at foo I see that it is not my-app-1.1.ear but ["my-app-1.1.ear"] -- so I feel that I might be going in the right direction, even though I have not got it right yet.
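One possible direction (a sketch, assuming a jboss-cli version with for-loop support; not verified against every WildFly release): iterate over the deployment names returned by an operation instead of parsing the list representation by hand:

# disables every deployment without hard-coding the version-suffixed name;
# if other applications are deployed too, the body would need a name check
for deploymentName in :read-children-names(child-type=deployment)
  deployment disable $deploymentName
done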

Mark standalone redis as read-only

I want to mark a standalone Redis server (not a Redis Cluster, not Redis Sentinel) as read-only. I have been googling this for quite some time, but I don't seem to find a definitive answer (almost all answers point to Cluster or Sentinel). I was looking for some config modification (CONFIG SET something).
NOTE: config set replica-read-only yes does not make the current redis-server read-only, but only its replicas.
My use case, basically, is a migration during which, at some point, I want to make the redis-server read-only. My application code can handle failures whenever a write call happens, so that's not an issue.
Also, if this is not directly possible from the Redis server, is there something I can do in the client code that would have the same effect? (I am using redis-py as the client library.) Although this is less than ideal.
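To illustrate the kind of client-side fallback I mean, a sketch (the write-command set below is deliberately incomplete and only illustrative):

import redis
from redis.exceptions import ReadOnlyError

# illustrative, incomplete set of write commands to block
WRITE_COMMANDS = {"SET", "DEL", "LPUSH", "HSET", "EXPIRE", "FLUSHALL"}

class ReadOnlyRedis(redis.Redis):
    def execute_command(self, *args, **options):
        # args[0] is the command name, e.g. "SET" or "CONFIG SET"
        if args and str(args[0]).split()[0].upper() in WRITE_COMMANDS:
            raise ReadOnlyError(f"write command blocked: {args[0]}")
        return super().execute_command(*args, **options)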
Things that I've tried
Played around with config set replica-read-only yes and other configs. They don't seem to apply to the current redis-server.
Tried marking a redis-server as a replica of itself (this was illogical, but I just wanted to see if it worked), but it turned out to delete all keys in my local Redis, so not something I can do.
Once the writes are done and you want to switch the node to read-only, there are a couple of ways to do that:
Modify redis.conf to have "min-replicas-to-write 3". Since you don't have 3 replicas, your node will stop accepting writes but will continue to serve reads, as shown below:
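A sketch of what the redis-cli session might look like once the setting is active (the key name is illustrative):

127.0.0.1:6379> CONFIG GET min-replicas-to-write
1) "min-replicas-to-write"
2) "3"
127.0.0.1:6379> SET somekey somevalue
(error) NOREPLICAS Not enough good replicas to write.
127.0.0.1:6379> GET somekey
(nil)

Note that the same parameter can also be changed at runtime with CONFIG SET min-replicas-to-write 3, which avoids a restart.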
However, please note that after modifying redis.conf, you will have to restart your redis node for the changes to take effect.
Another way: when you want to switch to read-only mode, create a replica, let it sync with the master, and then kill the master node. The replica will then continue to serve as read-only.
There are several solutions you can try:
You can use the rename-command config to disable write commands. If you only want to disable a small number of commands, that's a good solution. However, since there are so many write commands, you would need a lot of configuration entries, and it's easy to miss some of them (see the sketch after this list).
If you're using Redis 6.0, you can use Redis ACL to disable write commands for specific users (also sketched below).
You can set up a read-only Redis replica for your master and ask clients to read from the replica.
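A sketch of the first two options (the command list and user rule are illustrative, not exhaustive):

# redis.conf -- option 1: renaming a command to the empty string disables it
rename-command SET ""
rename-command DEL ""
rename-command LPUSH ""
rename-command FLUSHALL ""
# ...and so on for the remaining write commands, which is where this
# approach gets tedious and error-prone

# option 2 (Redis 6+): strip the @write command category from the default user
127.0.0.1:6379> ACL SETUSER default on nopass ~* -@write
OK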

How to run spark-jobs outside the bin folder of spark-2.1.1-bin-hadoop2.7

I have an existing spark-job. Its function is to connect to a Kafka server, get the data, and then store it in Cassandra tables. Right now this spark-job runs on the server from inside spark-2.1.1-bin-hadoop2.7/bin, but whenever I try to run it from another location, it does not run. The spark-job contains some JavaRDD-related code.
Is there any chance I can also run this spark-job from outside that directory, by adding a dependency in the pom or something else?
whenever I try to run it from another location, it does not run
spark-job is a custom launcher script for a Spark application, perhaps with some additional command-line options and packages. Open it, review the content and fix the issue.
If it's too hard to figure out what spark-job does and there's no one nearby to help you out, it's likely time to throw it away and replace it with the good ol' spark-submit.
Why don't you use it in the first place?!
Read up on spark-submit in Submitting Applications.
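For reference, a minimal spark-submit invocation might look like this (the class name, master URL, and jar path are placeholders, not taken from the question):

# runnable from any directory, as long as the paths are absolute
/path/to/spark-2.1.1-bin-hadoop2.7/bin/spark-submit \
  --class com.example.KafkaToCassandraJob \
  --master spark://your-master-host:7077 \
  /path/to/your-spark-job-assembly.jar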

Logging for two different environment logs in to a single log file

I am quite new to the log4j2 logger, and my requirement is to write logs from both an application server and a web server.
I have two different environments on which a JBoss server is deployed.
Right now I have a log file in the web server environment that records errors, and I want the application server to write its logs to the same file.
Please suggest.
If you want the logs to be integrated together, you should use a solution like Splunk or Elasticsearch/Logstash/Kibana (ELK).
When you try to write to a file from two different processes, the file will get corrupted unless you use file locking. However, with locking your throughput will decrease significantly, and it isn't supported for rolling files. So the best approach is to send the logs to a single process where they can be aggregated.
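As an illustration of the single-process approach, each server could ship its events over TCP instead of writing the shared file directly; a log4j2 sketch (host, port, and appender name are placeholders, and the receiving end, e.g. a Logstash TCP input, is assumed to exist):

<Configuration status="warn">
  <Appenders>
    <!-- send log events to one aggregating process over TCP -->
    <Socket name="Aggregator" host="log-aggregator.example.com" port="4560">
      <JsonLayout compact="true" eventEol="true"/>
    </Socket>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="Aggregator"/>
    </Root>
  </Loggers>
</Configuration>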

How does one run Spring XD in distributed mode?

I'm looking to start Spring XD in distributed mode (more specifically deploying it with BOSH). How does the admin component communicate to the module container?
If it's via TCP/HTTP, surely I'll have to tell the admin component where all the containers are? If it's via Redis, I would've thought that I'll need to tell the containers where the Redis instance is?
Update
I've tried running xd-admin and Redis on one box, and xd-container on another with redis.properties updated to point to the admin box. The container starts without reporting any exceptions.
Running the example stream submission curl -d "time | log" http://{admin IP}:8080/streams/ticktock yields no output to either console, and no output to the logs.
If you are using the xd-container script, then redis.properties is expected to be under XD_HOME/config, where XD_HOME points to the base directory containing the bin, config, lib & modules directories of XD.
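For reference, a sketch of what that file typically contains (key names follow the sample file shipped with the distribution, so verify against your copy; the hostname is a placeholder for the box actually running Redis):

# XD_HOME/config/redis.properties
redis.hostname=your-redis-host
redis.port=6379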
Communication between the Admin and Container runtime components is via the messaging bus, which by default is Redis.
Make sure the environment variable XD_HOME is set as per the documentation; if it is not you will see a logging message that suggests the properties file has been loaded correctly when it has not:
13/06/24 09:20:35 INFO support.PropertySourcesPlaceholderConfigurer: Loading properties file from URL [file:../config/redis.properties]
