Disabling/pausing database replication using ml-gradle

I want to disable database replication from the replica cluster in MarkLogic 8 using ml-gradle. After updating the configuration, I also want to re-enable it.
There are tasks for enabling and disabling flexrep in ml-gradle, but I couldn't find anything similar for database replication. How can this be done?

ml-gradle uses the Management API to apply configuration changes. Database replication is controlled by sending a PUT request to /manage/v2/databases/[id-or-name]/properties. Update your ml-config/databases/content-database.json file (the sample file does not include this property) to include a database-replication object with replication-enabled set to true.
To see what that object should look like, you can send a GET request to the properties endpoint.
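As a rough sketch, assuming the field names described above (verify the exact shape against the GET response for your database; the database name here is hypothetical), the relevant fragment of content-database.json might look like:

{
  "database-name": "my-content-database",
  "database-replication": {
    "replication-enabled": true
  }
}

Toggling replication-enabled between false and true would then disable and re-enable replication; the GET response will also show the foreign-cluster and foreign-database details that belong inside database-replication for your setup.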

You can create your own command to set replication-enabled - see https://github.com/rjrudin/ml-gradle/wiki/Writing-your-own-management-task
I'll also add a ticket for making official commands - e.g. mlEnableReplication and mlDisableReplication - defaulting to the content database but allowing any database to be specified.

Related

JanusGraph doesn't allow setting vertex ids even after setting the property `graph.set-vertex-id=true`

I'm running a JanusGraph server backed by Cassandra. It doesn't allow me to use custom vertex ids.
I see the following log message when the JanusGraph Gremlin server starts:
Local setting graph.set-vertex-id=true (Type: FIXED) is overridden by globally managed value (false). Use the ManagementSystem interface instead of the local configuration to control this setting
I even tried to set this property via the management API, still with no luck:
gremlin> mgmt = graph.openManagement()
gremlin> mgmt.set('graph.set-vertex-id', true)
As the log message already states, this config option has the mutability FIXED, which means that it is a global configuration option. Global configuration is described in this section of the JanusGraph documentation.
It states that:
Global configuration options apply to all instances in a cluster.
JanusGraph stores these configuration options in its storage backend, which is Cassandra in your case. This ensures that all JanusGraph instances have the same values for these configuration options. Any changes made to these options in a local file are ignored because of this. Instead, you have to use the management API to change them, which updates them in the storage backend.
But that is already what you tried with mgmt.set(). However, this doesn't work in this case because this specific config option has the mutability level FIXED. The JanusGraph documentation describes this as:
FIXED: Like GLOBAL, but the value cannot be changed once the JanusGraph cluster is initialized.
So, this value really cannot be changed in an existing JanusGraph cluster. Your only option is to start with a new cluster if you really need to change this value.
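For completeness, a hedged sketch of where the option would then go: FIXED options are read from the local configuration only when a cluster is first initialized, so for a brand-new cluster you would set it in the graph's properties file before first startup (the file name and storage settings below are illustrative):

# conf/janusgraph-cql.properties - only honored at first initialization of the new cluster
storage.backend=cql
storage.hostname=127.0.0.1
graph.set-vertex-id=true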
It is of course unfortunate that the error message suggested using the management API even though it doesn't work in this case. I have created an issue with the JanusGraph project to improve this error message and avoid such confusion in the future: https://github.com/JanusGraph/janusgraph/issues/3206

Search Templates | Loading & Caching the script file locally

Context:
We are moving from ES 5.X to ES 7.X
Earlier we were using the JEST client; now we are planning to use the ES High-Level REST Client
Our search queries are complex, and we are planning to use the SearchTemplate API
We will store template files locally & cache them to reduce the overhead of I/O
What I have tried so far:
I've read the EHLC documentation and I can't find a mechanism to load & cache script files directly from the file system
I can see that we can store the script in ES, which we don't want to do, since presumably we won't have changelogs there.
Question:
Is there a built-in mechanism to use a locally stored file as a script in EHLC? Or should we use inline scripts and load & cache the script files with custom code?
Based on the comments I'd suggest the following:
Keep track of the templates with git.
Monitor the changes and trigger a pub/sub message whenever applicable (PR merges etc.).
Configure your pub/sub handler to update the stored search template in ES.
Otherwise, with local loading + caching, machines running slightly older EHLC processes wouldn't be notified about the most recent changes in git and would continue using stale scripts.
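For the last step, the handler would (re)create the stored template through the cluster's scripts endpoint. A minimal sketch in console syntax, assuming a hypothetical template id and mustache template body:

PUT _scripts/product-search-template
{
  "script": {
    "lang": "mustache",
    "source": {
      "query": {
        "match": { "{{search_field}}": "{{search_value}}" }
      }
    }
  }
}

Every EHLC process then executes searches by template id, so all machines pick up the updated template from the cluster rather than from a local copy.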

Mark standalone redis as read-only

I want to mark a standalone Redis server (not Redis Cluster, not Redis Sentinel) as read-only. I have been googling this for quite some time but I can't seem to find a definite answer (almost all answers point to Cluster or Sentinel). I was looking for some config modification (CONFIG SET something).
NOTE: config set replica-read-only yes does not make the current redis-server read-only, but only its replicas.
My use case is a migration during which, at some point, I want to make the Redis server read-only. My application code can handle failures whenever a write call happens, so that's not an issue.
Also, if this is not directly possible from the Redis server, is there something I can do in the client code that would have the same effect (I am using redis-py as the client library)? (Although this is less than ideal.)
Things that I've tried
Played around with config set replica-read-only yes and other configs. They don't seem to apply to the current redis-server.
Tried marking a redis-server as a replica of itself (this was illogical, but I just wanted to see if it worked), but it turned out to delete all keys in my local Redis, so that's not something I can do.
Once the writes are done and you want to switch the node to read-only, there are a couple of ways to do that:
Modify redis.conf to have "min-replicas-to-write 3". Since you don't have 3 replicas, your node will stop accepting writes but will continue to serve reads, as shown below:
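(A hedged illustration of the effect; the prompt and key names are incidental:)

127.0.0.1:6379> CONFIG GET min-replicas-to-write
1) "min-replicas-to-write"
2) "3"
127.0.0.1:6379> SET somekey somevalue
(error) NOREPLICAS Not enough good replicas to write.
127.0.0.1:6379> GET existingkey
"existingvalue"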
However, please note that after modifying redis.conf, you will have to restart your redis node for the changes to take effect.
Another way: when you want to switch to read-only mode, create a replica, let it sync with the master, and then kill the master node. The replica will then exist as read-only.
There are several solutions you can try:
You can use the rename-command config directive to disable write commands (see the sketch after this list). If you only want to disable a small number of commands, that's a good solution. However, since there are many write commands, you might need a lot of configuration entries, and it's easy to miss some of them.
If you're using Redis 6.0, you can use Redis ACL to disable write commands for specific users (also sketched below).
You can set up a read-only Redis replica for your master, and ask clients to read from the replica.
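A hedged sketch of the first two options (the command selection, user name, and password are illustrative; renaming a command to the empty string disables it):

# redis.conf: neutralize individual write commands
rename-command SET ""
rename-command DEL ""
rename-command FLUSHALL ""

# Redis 6+ alternative, from redis-cli: a user limited to read commands
# (depending on your client, you may also need to allow connection commands)
ACL SETUSER readonly_app on >s3cret ~* +@read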

Is it possible to make a runtime db connection and use it in Schema, DB and models without affecting configs?

I want to use dynamic databases at runtime without affecting config/database.php, because of concurrent users.
I have a main db with a table that contains references to several other dbs. At runtime I need not only to connect to those dbs but may also want to run migrations on them.
I am aware that this is possible by having a second connection entry in config.database.connections, but I have a feeling that if two users hit the server at the same time, the physical config file changes may create a conflict.
I have also read (and experimented) that you can edit the second connection at runtime using the code below:
\Config::set('database.connections.mysql2.database', 'somedynamicdb');
DB::purge('mysql2');
But I fear that if it persists changes across different users, it may conflict for concurrent users. And if it does not persist changes, then it won't work for migrations.
I want to understand/know two things specifically:
What is the scope of the above code (i.e. the Config::set() call)? Does it persist across different user calls to the server?
If I call migrations using Artisan::call('migrate') with a --database=connectionname option, right after I change the db name in connectionname, will that use the dynamically set database or the physical config value?
UPDATE
Also worth noting: a call to Artisan::call('migrate') with --database=connectionname will make the new connection persist for the rest of your app call.
See here for details:
https://github.com/laravel/framework/issues/28253
Config::set will only apply for the request for which it was set, won't apply to any other requests, and will not persist beyond the request. If you're not processing a request (e.g. a CLI command) then it won't affect anything beyond the current PHP process.
As for item #2: if you're invoking from the command line, you can just do DB_CONNECTION=connectionname php artisan migrate. If you need to invoke the artisan command from code, using Config::set is still the right way to go.
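A minimal sketch of that combination, reusing the connection and database names from the question:

\Config::set('database.connections.mysql2.database', 'somedynamicdb');
DB::purge('mysql2'); // drop any cached connection so the new database name is used

Artisan::call('migrate', [
    '--database' => 'mysql2',
    '--force' => true, // skip the confirmation prompt outside local environments
]);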
We use connections created on the fly here all the time and it works very well. We set this up in middleware that runs after authentication, so the connection is only valid for the current user's request, based on their login information.
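A minimal sketch of that pattern, with hypothetical connection and column names (resolve the database name however your main db stores it):

<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

class SetTenantDatabase
{
    public function handle($request, Closure $next)
    {
        // Look up the per-user database name, e.g. from the authenticated user.
        $database = $request->user()->tenant_database;

        Config::set('database.connections.tenant.database', $database);
        DB::purge('tenant'); // forget any cached connection for this name

        return $next($request);
    }
}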

AWS RDS database can't read record that was just written to database

I'm seeing an error with some Laravel code that uses an AWS RDS database. The code writes a record to the database and then immediately does a search to load that record using the primary key, and gets no results.
If I try it manually afterwards, I find the record. If I insert a 1-second sleep in the code, it works correctly.
I've tried this using Laravel's separate settings for read and write hosts. I've also tried setting them to the same host, and only using one host. The result is always the same; however, other environments with the same configuration do not have the error.
Is there an option in RDS that needs to be changed to make the record available immediately after it's written?
The error is due to MySQL master-slave replication lag.
A common mistake is to use a MySQL cluster and then perform a read immediately after a write.
Since the read occurs on one of the slave/read hosts and the write occurs on the master, the data would not be replicated at the time of the read.
There are a couple of ways to rectify the error:
The read immediately after the write must be performed on the master (not the slave). Even though you've mentioned that you changed it to a single host, people often make a mistake while switching the connection. Refer to this SO post on how to properly switch connections in Laravel.
An easier way may be to use the sticky database option in Laravel (see the config sketch at the end of this answer). Beware: this may cause performance issues if not used carefully, for only the use case you desire. From the docs:
The sticky option is an optional value that can be used to allow the immediate reading of records that have been written to the database during the current request cycle.
If the sticky option is enabled and a "write" operation has been performed against the database during the current request cycle, any further "read" operations will use the "write" connection.
The most "non-obvious" way is to NOT perform a read immediately after a write. Think about whether this can be avoided depending on your use case.
Other methods: refer to this SO post.
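Returning to the sticky option: a minimal sketch of where it goes in config/database.php, with hypothetical hosts (the remaining connection settings are elided):

'mysql' => [
    'driver' => 'mysql',
    'read' => ['host' => ['192.168.1.1']],
    'write' => ['host' => ['192.168.1.2']],
    'sticky' => true, // reads after a write in the same request use the write connection
    // ...
],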
