ActiveMQ Artemis HA & users/roles - am I supposed to create user/role on each node separately? - high-availability

I have an ActiveMQ Artemis cluster (2 nodes) with active/backup HA (shared-store mode), version 2.17.0.
The shared store is set up with NFS, mounted at $ARTEMIS_INSTANCE/data. In broker.xml I specified the following settings, which are pretty standard:
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
According to this official documentation page, there is a login.conf file in the etc directory which specifies the users and roles files. Mine has the following contents:
activemq {
org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule required
debug=false
reload=true
org.apache.activemq.jaas.properties.user="artemis-users.properties"
org.apache.activemq.jaas.properties.role="artemis-roles.properties";
};
Well, everything seems to work fine with it, but I noticed that every time I want to create a new user/role, I have to create it twice, once on each node separately. If I had replication HA mode and 6 nodes, I would need to create the same user/role 6 times (once per node).
Am I missing anything here?
Then I came up with the idea of literally moving artemis-users.properties and artemis-roles.properties to the $ARTEMIS_INSTANCE/data directory and modifying the login.conf file accordingly, so I can create a user/role only once and the created entries are accessible from the other node(s):
activemq {
org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule required
debug=false
reload=true
org.apache.activemq.jaas.properties.user="../data/artemis-users.properties"
org.apache.activemq.jaas.properties.role="../data/artemis-roles.properties";
};
Since this is a shared store, it kind of makes sense to me to store them this way. I did quite a bit of testing and everything seems to work fine; I don't think there will be any race conditions this way.
Again, am I missing anything? Any suggestions or better workarounds?

The PropertiesLoginModule is provided by default because it is simple and straightforward to configure for basic use-cases. However, it's not really designed for production use across a cluster. Typically you'd use an LDAP server (or some equivalent) which is a central store for all your user & role data. As the documentation states:
In general, using properties files and broker-centric user management for anything other than very basic use-cases is not recommended. The broker is designed to deal with messages. It's not in the business of managing users, although that functionality is provided at a limited level for convenience. LDAP is recommended for enterprise level production use-cases.
You are, of course, free to use the PropertiesLoginModule in more complex use-cases (e.g. like yours). I think putting the properties files on shared storage is fine and not likely to lead to problems.
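If you do later move to LDAP, the change stays inside etc/login.conf: you swap the PropertiesLoginModule for the LDAPLoginModule and point it at your directory. A rough sketch, where the URL, credentials, base DNs and search filters are all placeholders you would adapt to your own directory layout:
activemq {
org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule required
debug=false
connectionURL="ldap://ldap.example.com:389"
connectionUsername="cn=admin,dc=example,dc=com"
connectionPassword=secret
authentication=simple
userBase="ou=users,dc=example,dc=com"
userSearchMatching="(uid={0})"
userSearchSubtree=true
roleBase="ou=roles,dc=example,dc=com"
roleName=cn
roleSearchMatching="(member={0})"
roleSearchSubtree=true;
};
With something like this in place, users and roles live in the directory, so there is nothing to copy between nodes at all.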

Related

Mark standalone redis as read-only

I want to mark a standalone Redis server (not a Redis Cluster, not Redis Sentinel) as read-only. I have been googling this for quite some time, but I don't seem to find a definite answer (almost all answers point to Cluster or Sentinel). I was looking for some config modification (CONFIG SET something).
NOTE: config set replica-read-only yes does not make the current redis-server read-only, but only its replicas.
My use-case basically is I am doing a migration wherein at some point I want to make the redis-server read-only. My application code can handle failures whenever a write call happens so that's not an issue.
Also, if this is not directly possible from redis server, is there something that I can do in the client code that'll have the same effect (I am using redis-py as the client library)? (Although this is less than ideal)
Things that I've tried
Played around with config set replica-read-only yes and other configs. They don't seem to apply to the current redis-server.
Tried marking a redis-server as a replica of itself (this was illogical, but I just wanted to see if it worked), but it turned out to delete all keys in my local Redis, so that's not something I can do.
Once the writes are done and you want to switch the node to read-only, there are a couple of ways to do that:
Modify redis.conf to include min-replicas-to-write 3. Since you don't have 3 replicas, your node will stop accepting writes but will continue to serve reads (see the sketch at the end of this answer).
However, please note that after modifying redis.conf you will have to restart your Redis node for the change to take effect.
Another way is to create a replica when you want to switch to read-only mode, let it sync with the master, and then kill the master node. The replica will then remain as a read-only copy.
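For completeness, min-replicas-to-write can usually also be changed at runtime with CONFIG SET, which avoids the restart. A rough redis-py sketch (host, port and key names are only illustrative):
import redis

r = redis.Redis(host="localhost", port=6379)

# Demand more replicas than actually exist: writes start failing, reads keep working.
r.config_set("min-replicas-to-write", 3)

print(r.get("some-key"))        # reads still succeed

try:
    r.set("some-key", "value")  # writes are now rejected (NOREPLICAS error)
except redis.exceptions.RedisError as exc:
    print("write rejected:", exc)
To go back to normal operation, set min-replicas-to-write back to 0.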
There are several solutions you can try:
You can use the rename-command config to disable write commands. If you only want to disable a small number of commands, that's a good solution. However, since there are a lot of write commands, you might end up with a lot of configuration and it's easy to miss some of them.
If you're using Redis 6.0, you can use Redis ACL to disable write commands for specific users (see the sketch after this list).
You can set up a read-only Redis replica for your master and ask clients to read from the replica.
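As a rough sketch of the ACL option with redis-py (Redis 6+; the user name and password are made up, and the raw ACL command is used so it works across redis-py versions):
import redis

admin = redis.Redis(host="localhost", port=6379)

# Create/update a user that can authenticate and run only read commands on all keys.
admin.execute_command("ACL", "SETUSER", "readonly-app", "on", ">s3cret", "~*", "+@read")

ro = redis.Redis(host="localhost", port=6379, username="readonly-app", password="s3cret")
print(ro.get("some-key"))    # allowed
try:
    ro.set("some-key", "x")  # rejected with a NOPERM error
except redis.exceptions.RedisError as exc:
    print("write rejected:", exc)
Note that this only restricts clients that authenticate as that user; the default user can still write unless you lock it down as well.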

existdb: identify database server

We have a number of (developer) existDb database servers, and some staging/production servers.
Each has its own configuration, and they are slightly different.
We need to select which configuration to load and use in queries.
The configuration is to be stored in an XML file within the repository.
However, when syncing the content of the servers, a single burnt-in XML file is not sufficient, since it is overwritten during copying from the other server.
For this, we need the physical name of the actual database server.
The only function I have found is request:get-server-name, which is not quite stable, since a single eXist server can be accessed through a number of different URLs (localhost, intranet or external). That would lead to unnecessary duplication of the configuration, one entry for each external URL...
(Accessing some local files in the file system is not secure and not fast.)
How can I get the physical name of the eXist-db server from XQuery?
I'm sorry, but I don't fully understand your question. Are you talking about eXist's default conf.xml or your own configuration file that you need to store in a VCS repo? Should the XQuery be executed on one instance and trigger an event in all others, or just some, or...? Without some code it is difficult to see why and when something gets overwritten.
You could try console:jmx-token, which does not vary depending on the URL (at least it shouldn't).
Also, you might find it much easier to use a Docker-based approach, either with multiple instances coordinated via docker-compose, or to keep the individual configs from interfering with each other when moving from dev to staging to production: https://github.com/duncdrum/exist-docker
If I understand correctly, you basically want to be able to get the hostname or the IP address of a server from XQuery. If the functions in the XQuery Request module are not doing as you wish, then another option would be to set a Java System Property when starting eXist-db. This system property could be the internal DNS name or IP of your server, for example: -Dour-server-name=server1.mydomain.com
From XQuery you could then read that Java System property using util:system-property("our-server-name").
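Putting the two together, a minimal XQuery sketch (the property name matches the -D flag above; the document path is just an example):
(: pick the per-server configuration based on the JVM system property set at startup :)
let $server := util:system-property("our-server-name")
return doc(concat("/db/apps/myapp/config/", $server, ".xml"))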

Disabling/Pause database replication using ML-Gradle

I want to disable the Database Replication from the replica cluster in MarkLogic 8 using ML-Gradle. After updating the configurations, I also want to re-enable it.
There are tasks for enabling and disabling flexrep in ml-gradle, but I couldn't find any such thing for Database Replication. How can this be done?
ml-gradle uses the Management API to handle configuration changes. Database Replication is controlled by sending a PUT command to /manage/v2/databases/[id-or-name]/properties. Update your ml-config/databases/content-database.json file (example that does not include that property) to include database-replication, including replication-enabled: true.
To see what that object should look like, you can send a GET request to the properties endpoint.
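As a rough illustration only (take the exact nesting and field names from that GET response rather than from here, since they may differ between MarkLogic versions), the relevant part of ml-config/databases/content-database.json could look something like this:
{
  "database-name": "my-content-database",
  "database-replication": {
    "foreign-replica": [
      {
        "foreign-cluster-name": "replica-cluster",
        "foreign-database-name": "my-content-database",
        "connect-forests-by-name": true,
        "lag-limit": 15,
        "replication-enabled": true
      }
    ]
  }
}
Deploying with replication-enabled set to false and then back to true would then be one way to toggle replication from the same configuration file.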
You can create your own command to set replication-enabled - see https://github.com/rjrudin/ml-gradle/wiki/Writing-your-own-management-task
I'll also add a ticket for making official commands - e.g. mlEnableReplication and mlDisableReplication, with those defaulting to the content database, and allowing for any database to be specified.

migrating mod_plsql application to Oracle REST Data Services

I read on MOS Doc ID 1945619.1 that starting with the 12.1.3 Oracle HTTP Server (OHS), the mod_plsql feature has been deprecated and will not be included with the 12.2 Oracle HTTP Server.
For the future, Oracle recommends moving to Oracle REST Data Services (formerly known as Oracle APEX Listener) as an alternative to mod_plsql.
Our shop has a lot of mod_plsql applications (i.e. applications written using the HTP/HTF packages) in production. Since I don't know anything about Oracle REST Data Services, I'm asking whether we can migrate the old applications to this new product without changing the code.
Thank you.
Kind regards, Cristian
Doug McMahon (Oracle employee) has a great open source module for Apache.
Apache PL/SQL Gateway Module
(mod_owa)
https://oss.oracle.com/projects/mod_owa/dist/documentation/modowa.htm
I am using it in a production environment and I highly recommend it. It's really fast and rock solid.
You need to do some compiling, but it's worth it to be able to use Apache 2.4 with a mod_plsql-style gateway.
Steps:
Download httpd 2.4.? from apache.org and extract it.
If CentOS 6 or less, also download apr and apr-util.
Configure with --enable-so, then run make and make install:
./configure --enable-so --with-apr=/usr/local/apr --with-apr-util=/usr/local/apr
Download mod_owa and unzip it.
Create an empty directory. Copy all files from "apache24" into the new folder, then copy all files from "src" into it.
Enter the new folder and edit modowa.mk <-- important: add $ORACLE_HOME and edit APACHE_TOP.
Copy mod_owa.so to Apache's modules directory.
Add a modowa.conf in Apache's conf/ directory.
Example modowa.conf:
LoadModule owa_module modules/mod_owa.so
<Location /pls>
Options None
SetHandler owa_handler
OwaUserid user/pass
OwaNLS AMERICAN_AMERICA.AL32UTF8
OwaPool 100
OwaStart "package.procedure"
OwaDocProc "wwv_flow_file_mgr.process_download"
OwaDocTable photos_upload BLOB_CONTENT
OwaUploadMax 50M
OwaCharset "utf8"
order deny,allow
allow from all
OwaReset LAZY
OwaCharsize 4
OwaFlex package.procedure
OwaHttp REST
</Location>
Before starting httpd, ORACLE_HOME and NLS_LANG need to be set (ORACLE_SID as well if the database is local). It needs access to an Oracle home with libclntsh.so (an Oracle client install will do).
I simply added an oracle.conf file (one line with the full path to the Oracle home's lib directory) under /etc/ld.so.conf.d and ran ldconfig.
Really scalable and a much cleaner setup than OHS.
My shop is pretty much in the same situation as you are.
We also have some very large mod_plsql/htp based applications and will have to migrate to the Oracle REST Data Services at some point.
We have already spent quite some time testing different ORDS configurations, and our overall conclusions are:
only APEX applications are fully supported
key functionality is still available
harder to configure and maintain
slight performance degradation
some mod_plsql configuration options no longer exist or have changed
The biggest problems we are currently facing (and actually preventing us from switching to ORDS) are some restrictions when using non-APEX (pure HTF/HTP) applications.
We already filed some SR's because some functionality in ORDS (for example the file upload and download API) is only available when running an APEX application.
The biggest hurdle to get over is setting up Oracle REST Data Services (ORDS) and securing it. Once this is done, your web toolkit apps will work the same. The URL may change slightly, so if you've referenced URLs using full paths as opposed to relative paths you might need to modify code.
I am not sure if ORDS is as powerful as Apache in areas like mod_rewrite, mod_proxy, virtual hosts with multiple IP addresses, etc...
Another open source alternative is tox.

SCM management of AppFabric Cache Cluster

I'm working on building out a standard set of configurations for our cache clusters within App Fabric. My goal is to have a repeatable cache settings configuration when we load up a new environment (so server names are different, number of hosts, and other environmental factors).
My initial pass was to utilize the XML available from Export-CacheClusterConfig and simply change server names and size attributes in the <hosts> section, but I'm not sure what else is automatically registered with those values (the hostId parameter, for example).
My next approach that I've considered is a PowerShell script to simply build up the various caches with the correct parameters passed in that would simply run as a post-deploy step.
Anyone else have experience with repeatable AppFabric cache cluster deployments?
After trying both, the more successful option seems to be a combination of two factors. Management of the cache cluster (host information) is primarily an operations concern and is best managed by the operations team (i.e. those guys that read Server Fault). Since this information is stored in the configuration as well (and would require an XML file obtained from Export-CacheClusterConfig for each environment), it's best left to the operations team to decide how they want to manage it. Importing the wrong file (with incorrect host information) has led to a number of issues.
So, we're left with PowerShell scripts. Here's a sample that I have. It could be cleaned up (check for cache existence first; a sketch of that follows the sample) but you get the general idea. It's also much easier to store in source control (as it's just one file).
New-Cache -CacheName CRMTickets -Eviction None -Expirable false -NotificationsEnabled true
New-Cache -CacheName ConsultantCache -Eviction Lru -Expirable true -TimeToLive 60
New-Cache -CacheName WorkitemCache -Eviction None -Expirable true -TimeToLive 60
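For the existence check mentioned above, a sketch along these lines should work (Use-CacheCluster assumes the AppFabric Caching PowerShell module is already loaded; cache names and settings match the sample):
# Connect to the cluster configured on this host, then create each cache only if it is missing.
Use-CacheCluster

$existing = Get-Cache | ForEach-Object { $_.CacheName }

if ($existing -notcontains 'CRMTickets') {
    New-Cache -CacheName CRMTickets -Eviction None -Expirable false -NotificationsEnabled true
}
if ($existing -notcontains 'ConsultantCache') {
    New-Cache -CacheName ConsultantCache -Eviction Lru -Expirable true -TimeToLive 60
}
if ($existing -notcontains 'WorkitemCache') {
    New-Cache -CacheName WorkitemCache -Eviction None -Expirable true -TimeToLive 60
}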
