Bind Postgres-XL coordinator to listen only on a specific interface

I'm trying to bind the coordinator to listen only on a specific interface. It seems the listen_addresses option is being ignored when specified within postgresql.conf on the coordinator.
There is another file within the coordinator data directory called postmaster.opts, which seems to be reset each time pgxc_ctl connects to the coordinator in order to start/stop it.
If I could include the -h option within that file, then the coordinator would be bound to the IP of my choice.
Interestingly, GTM will bind to the IP specified within listen_addresses; neither the coordinator nor the data node will.
I looked through the Postgres-XL documentation, the pgxc_ctl documentation, the mailing list, and Google in general, but could not find how to do this. My last resort is to read through the code base (which I'm trying to do now).
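For reference, the relevant line in my postgresql.conf looks like this (the address is just an example):
listen_addresses = '192.168.0.10'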
--- edit 1:
It seems listen_addresses is honoured as long as postgres is not started with the -i switch (per the PostgreSQL documentation, -i is equivalent to listen_addresses = '*', and command-line options override postgresql.conf). For some reason pgxc_ctl adds -i to the list of call parameters within postmaster.opts on the coordinator.
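For illustration, postmaster.opts on the coordinator ends up looking roughly like this (the binary path and data directory are made up, other options elided); note the trailing -i:
/usr/local/pgxl/bin/postgres "-D" "/data/coord" "-i"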
--- edit 2:
It seems the -i option is added in the source code when starting the coordinator (and the data node). So the only way to move forward is to patch and rebuild from source.
--- edit 3:
I have been testing pgxc_ctl built with changes to datanode_cmd.c as well as coord_cmd.c (the -o -i switches removed from all calls to pg_ctl). With this change, both the coordinator and the data node now bind to the interface stated within listen_addresses in postgresql.conf. It would be interesting to understand why -o -i was hardcoded. Problem solved.
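To verify the binding after the rebuild (the address and port are examples, assuming the default coordinator port):
ss -ltn | grep 5432
The listener should now show 192.168.0.10:5432 rather than 0.0.0.0:5432.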

Configure Apache Solr logging to show warnings and slow queries via global config file

I start Solr in the foreground like so: C:\solr-8.10.1\bin\solr start -p 8983 -m 1536m -f -v
It shows a command window and it logs a massive amount of DEBUG info, which I don't need.
I want to reduce the amount of logging here, and I found this: https://solr.apache.org/guide/8_5/configuring-logging.html
This seems exactly like what I need for my scenario:
I have many cores, each with their own solrconfig.xml:
C:\solr-8.10.1\server\solr\core1
C:\solr-8.10.1\server\solr\core2
C:\solr-8.10.1\server\solr\core3
C:\solr-8.10.1\server\solr\coreX
I don't want to make the logging changes to each core separately; I want one global setting that applies to all of them.
I don't use the Solr API; I want to be able to change settings via config files.
I want ERRORS to be logged, and also any slow queries.
After reading the tutorial, I decided I need to:
start Solr using solr start -p 8983 -m 1536m -f -q
add an element <slowQueryThresholdMillis>1000</slowQueryThresholdMillis>
However, it's that last part where I have questions. I see a reference made to so-called configsets, but I have no idea if that's the place where I need to configure my global settings.
I inspected the sample files, e.g. \solr-8.10.1\server\solr\configsets\sample_techproducts_configs\conf\solrconfig.xml
But I can't figure out if that's the right config file or how it would even apply to all other cores without any reference to the other cores.
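If I understand the docs correctly (and I may not), the element goes in the <query> section of solrconfig.xml, and a configset would let several cores share one such file via core.properties; something like this, where shared_configs is my own placeholder name:
<!-- $SOLR_HOME/configsets/shared_configs/conf/solrconfig.xml (fragment) -->
<query>
  <slowQueryThresholdMillis>1000</slowQueryThresholdMillis>
</query>
# each core's core.properties would then point at the shared configset
name=core1
configSet=shared_configs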
I've had a look at these already, but they seem to want to handle things via code, whereas I'm looking for a file configuration:
configure Logger via global config file
Use of readConfiguration method in logging activities
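For the log level itself, the closest thing to a global file I've found so far is server\resources\log4j2.xml, which appears to apply to the whole node rather than to individual cores; e.g. raising the root level there (fragment only; the appender name should be copied from the stock file):
<Root level="warn">
  <AppenderRef ref="MainLogFile"/>
</Root>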

I have a problem sending a downstream to a gateway

I'm using a Dragino DLOS8 gateway and a Dragino LT-22222-L end node. I wrote a script to read and show the values on my end node's inputs, but I couldn't control my relays. I found an example script (in a Dragino article titled "Communication with ABP End Node") showing this function to control them (it controlled the digital outputs, but I changed it to relays):
echo "${DEV_2},imme,hex,030101" > /var/iot/push/down
I even tried a more specific one:
echo "${DEV_1},imme,hex,030101,20,1,SF12,869525000,1" > /var/iot/push/down
The article indicates that I have to create a file in the directory /var/iot/push for downstream purposes. I tried using WinSCP and the command touch down, but the file was deleted a few seconds later. If anyone has used these devices or knows about this, please help me.
I had a similar problem with Dragino, also with device logfiles in the /var/iot/channels directory. I got information from Dragino support that those files are "consumed" by the MQTT and TCP processes, so they are periodically deleted: I understand that the LoRaWAN, MQTT, or TCP application has to work with those files as soon as they are generated.
Notice that "imme" sends downstream immediately to C type devices, maybe "time" (downstream after receiving data from node) is better for your application.

Mark standalone Redis as read-only

I want to mark a standalone Redis server (not a Redis Cluster, not Redis Sentinel) as read-only. I have been googling this for quite some time but I don't seem to find a definite answer (almost all answers point to Cluster or Sentinel). I was looking for some config modification (CONFIG SET something).
NOTE: config set replica-read-only yes does not make the current redis-server read-only, but only its replicas.
My use case is a migration during which, at some point, I want to make the redis-server read-only. My application code can handle failures whenever a write call happens, so that's not an issue.
Also, if this is not directly possible from the Redis server, is there something I can do in the client code that would have the same effect (I am using redis-py as the client library)? Although this is less than ideal.
Things that I've tried
Played around with config set replica-read-only yes and other configs. They don't seem to apply to the current redis-server.
Tried marking the redis-server as a replica of itself (this was illogical, but I just wanted to see if it worked), but it turns out this deleted all the keys in my local Redis, so it is not something I can do.
Once the writes are done and you want to switch the node to read-only, there are a couple of ways to do that:
Modify the redis.conf to have "min-replicas-to-write 3". Since you don't have 3 replicas, your node will stop accepting writes but will continue to serve reads, as shown below:
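(A representative redis-cli session; the exact error text can vary by version, and the key foo is assumed not to exist.)
127.0.0.1:6379> SET foo bar
(error) NOREPLICAS Not enough good replicas to write.
127.0.0.1:6379> GET foo
(nil)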
However, please note that after modifying redis.conf you will have to restart your Redis node for the change to take effect. (Alternatively, CONFIG SET min-replicas-to-write 3 applies the same setting at runtime without a restart.)
Another way: when you want to switch to read-only mode, create a replica, let it sync with the master, and then kill the master node. The replica will then remain as read-only.
There are several solutions you can try:
You can use the rename-command config to disable write commands. If you only want to disable a small number of commands, that's a good solution. However, since there are a lot of write commands, you might end up with a lot of configuration, and it's easy to miss some of them (see the sketches after this list).
If you're using Redis 6.0, you can use Redis ACL to disable write commands for specific users.
You can setup a read-only Redis replica for your master, and ask clients to read from the replica.
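Hedged sketches of the first two options (the user name and password are placeholders):
# redis.conf: disable individual write commands by renaming them to an empty string
rename-command SET ""
rename-command DEL ""
rename-command FLUSHALL ""
# Redis 6+: create a user restricted to read commands (via redis-cli)
ACL SETUSER readonly_user on >some_password ~* +@read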

Hosts File for Greenplum Installation

I am setting up a 3-node Greenplum cluster for a POC. While checking the installation steps, I found that the hostfile_exkeys file has to be on the master node.
Can anyone tell me where I should create this file (location, node, etc.)?
And, most importantly, what should I put in it?
You create hostfile_exkeys on the Master. It isn't needed on the other hosts. You can put it in /home/gpadmin or anywhere that is convenient for you.
You put the three hostnames for your POC in this file. Example:
mdw
sdw1
sdw2
This is documented pretty well here: https://gpdb.docs.pivotal.io/5120/install_guide/prep_os_install_gpdb.html
You can also run a POC in the cloud. Greenplum is available in AWS, Azure, and GCP, and it does all of the configuration for you. You can even use the BYOL product listings free for 90 days to evaluate the product, or use the hourly billed products to get support while you evaluate.
There are examples in the utility reference for the gpssh-exkeys documentation but, in general, you should put in all the hostnames in your cluster. If there are multiple network interfaces, those can go in instead.
I generally put this file either in /home/gpadmin or /home/gpadmin/gpconfigs (good place to keep all files for initial setup and initialization).
Your file will look something like (one name per line):
mdw
sdw1
sdw2
If there are 2 network interfaces, it might look something like:
mdw
mdw-1
mdw-2
sdw1
sdw1-1
sdw1-2
sdw2
sdw2-1
sdw2-2
Your /etc/hosts file (on all servers) should include the IP addresses for all the interfaces and their names, so this hostfile should match the names listed in /etc/hosts.
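For example, with two extra interfaces per host, /etc/hosts on every server might contain (addresses are made up):
192.168.1.10  mdw
10.0.0.10     mdw-1
10.0.1.10     mdw-2
192.168.1.11  sdw1
10.0.0.11     sdw1-1
10.0.1.11     sdw1-2
192.168.1.12  sdw2
10.0.0.12     sdw2-1
10.0.1.12     sdw2-2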
This is primarily to allow the master to exchange ssh keys with all hosts so that login to the hosts is always password-less. After you have this file set up, you will run (for example):
gpssh-exkeys -f /home/gpadmin/gpconfigs/yourhostfilename
I hope this helps.

NiFi: Could not find Process Group with ID

After installing NiFi, I am trying to create a flow to test HDFS-NiFi connectivity, but I am getting the following error continuously for every click on the dashboard.
I am root, so I have complete access to the components.
Did you copy a flow.xml.gz file from an existing instance of NiFi, or do you have controller services, reporting tasks, or other components which reference a group that no longer exists?
Try searching for the UUID using the search bar in the top right, or shut down NiFi, and use the following terminal commands to look for any references to this process group ID (double check or copy/paste the UUID because I typed it from looking at your screenshot):
cd $NIFI_HOME
gunzip -k conf/flow.xml.gz
grep '1861df7a-0168-1000-3931-028d9eb92cbd' conf/flow.xml
You should remove the referenced process group. (You can back up the flow.xml.gz first if you are concerned about data loss; this would be the flow definition, not any flowfiles, content, or provenance data.)
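For example (run from $NIFI_HOME; the .bak suffix is just a convention):
cp conf/flow.xml.gz conf/flow.xml.gz.bak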
