Does RDS proxy support automatic or transparent read/write splitting - amazon-aurora

Can we do automatic or transparent read/write splitting using Amazon RDS Proxy, or do we need to explicitly point at separate reader and writer endpoints to split the traffic?

No, RDS Proxy does not support read/write splitting on its own.
Here is the link for reference:
https://aws.amazon.com/rds/proxy/
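Since the proxy will not split traffic for you, the routing has to happen in the application, typically by sending writes to the proxy's default (read/write) endpoint and reads to a separate read-only proxy endpoint or reader. A minimal sketch in Python, assuming an Aurora PostgreSQL target behind the proxy and psycopg2; the endpoint hostnames, database name and credentials are placeholders:

    # Sketch: manual read/write routing against two proxy endpoints.
    # RDS Proxy does not split queries itself, so the application picks
    # the endpoint. Hostnames and credentials below are placeholders.
    import psycopg2

    WRITER_ENDPOINT = "my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com"
    READER_ENDPOINT = "my-proxy-ro.proxy-abc123.us-east-1.rds.amazonaws.com"

    def get_connection(read_only=False):
        """Return a connection to the reader or writer endpoint."""
        host = READER_ENDPOINT if read_only else WRITER_ENDPOINT
        return psycopg2.connect(host=host, dbname="appdb", user="app", password="secret")

    # Writes go to the writer endpoint...
    with get_connection() as conn, conn.cursor() as cur:
        cur.execute("INSERT INTO events (payload) VALUES (%s)", ("hello",))

    # ...reads are sent to the read-only endpoint explicitly.
    with get_connection(read_only=True) as conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM events")
        print(cur.fetchone())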

Related

Encrypting SystemDisk in Alibaba ECS

On checking the following documentation for Alibaba Cloud ECS:
https://www.alibabacloud.com/help/doc-detail/59643.htm
https://www.alibabacloud.com/help/doc-detail/25499.htm?#CreateInstance
https://www.alibabacloud.com/help/doc-detail/25517.htm
I see that there's an option to enable encryption for the data disks using the following option(s):
Set the parameter DataDisk.n.Encrypted (CreateInstance) or Encrypted (CreateDisk) to true.
However, I don't see a similar option for encrypting the SystemDisk of the ECS instance while creating the instance, or in ModifyDiskAttribute.
Is there an option for doing this which is perhaps not documented?
I missed this in the documentation; it's present in the Limits section of the following article:
https://www.alibabacloud.com/help/doc-detail/59643.htm
You can only encrypt data disks, not system disks.
Also, the following official Alibaba Cloud blog post states that "Data in the instance operating system is not encrypted.":
https://www.alibabacloud.com/blog/data-encryption-at-storage-on-alibaba-cloud_594581
I think the reason is that encrypting the system disk would slow down the instance's processing capabilities.
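For completeness, here is a rough sketch of how the data-disk encryption parameter mentioned above can be set on CreateInstance through the Alibaba Cloud Python SDK's CommonRequest interface; the access keys, region, image ID and instance type are placeholders:

    # Sketch: enable encryption for a data disk (not the system disk) at
    # instance creation time by setting DataDisk.1.Encrypted=true.
    # Access keys, region, image ID and instance type are placeholders.
    from aliyunsdkcore.client import AcsClient
    from aliyunsdkcore.request import CommonRequest

    client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

    request = CommonRequest()
    request.set_domain("ecs.aliyuncs.com")
    request.set_version("2014-05-26")
    request.set_action_name("CreateInstance")
    request.add_query_param("RegionId", "cn-hangzhou")
    request.add_query_param("ImageId", "<image-id>")
    request.add_query_param("InstanceType", "ecs.g5.large")
    request.add_query_param("SystemDisk.Category", "cloud_ssd")  # no Encrypted parameter exists for the system disk
    request.add_query_param("DataDisk.1.Size", "100")
    request.add_query_param("DataDisk.1.Category", "cloud_ssd")
    request.add_query_param("DataDisk.1.Encrypted", "true")      # only data disks can be encrypted

    response = client.do_action_with_exception(request)
    print(response)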

ZFS on AWS /dev/ names when creating a pool

When creating ZFS pools on Linux it is recommended to avoid names like /dev/sdX or /dev/hdX, because those mappings are not persistent and tend to change between restarts. Instead we use /dev/disk/by-id or /dev/disk/by-path.
What I observed is that on AWS /dev/disk/by-id is not populated. So what is the best approach for AWS?
We can use /dev/disk/by-path for zpools in AWS; it is populated by default. As noted in https://github.com/zfsonlinux/zfs/wiki/faq#selecting-dev-names-when-creating-a-pool, using /dev/disk/by-path is one of the recommended approaches, along with /dev/disk/by-id.
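To check which persistent names a given instance exposes before running zpool create, here is a small sketch that just lists the symlinks (the devices shown will depend on your instance type):

    # Sketch: list the persistent device names under /dev/disk/by-path and
    # the kernel devices they resolve to, for use in `zpool create`.
    import os

    BY_PATH = "/dev/disk/by-path"

    for name in sorted(os.listdir(BY_PATH)):
        link = os.path.join(BY_PATH, name)
        target = os.path.realpath(link)   # e.g. /dev/xvdb or /dev/nvme1n1
        print(f"{link} -> {target}")

    # The pool is then created with the persistent names, e.g.:
    #   zpool create tank /dev/disk/by-path/<device-link>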

OpenDJ: is it possible to use a subentry-based password policy for pass-through?

I know OpenDJ can create a pass-through policy in cn=config, but cn=config is not replicated.
I'm wondering if it's possible to create such a pass-through policy as a subentry, so that replicas can pick it up.
Another requirement is that the pass-through policy can be changed at runtime.
If so, is there any documentation or example that I can learn from?
Thanks
Policies can be changed at runtime, and the change is applied immediately without a server restart (like most of the OpenDJ configuration).
But there is no support for pass-through authentication policies as subentries in OpenDJ for now.
How many pass-through policies do you think you will need to configure?
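As for the runtime part: the configuration under cn=config can be modified over LDAP (or with dsconfig) and takes effect immediately. A rough sketch with the ldap3 library; the policy DN and attribute name below are illustrative placeholders, not the exact OpenDJ schema names, so check your own cn=config entries:

    # Sketch: change a pass-through authentication policy at runtime by
    # modifying its entry under cn=config over LDAP. The DN and attribute
    # name are placeholders -- look up the real entry on your server.
    from ldap3 import Server, Connection, MODIFY_REPLACE

    server = Server("ldap://localhost:1389")
    conn = Connection(server, user="cn=Directory Manager", password="password", auto_bind=True)

    policy_dn = "cn=PTA Policy,cn=Password Policies,cn=config"   # placeholder DN
    conn.modify(
        policy_dn,
        {"ds-cfg-primary-remote-ldap-server": [(MODIFY_REPLACE, ["remote.example.com:636"])]},  # placeholder attribute
    )
    print(conn.result)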

Local mongo server with mongolab mirror & fallback

How do I set up a local MongoDB with a mirror on MongoLab (propagate all writes from local to MongoLab so they are always synchronized; I don't care about atomicity, just that it syncs in a reasonable time frame)?
How do I use MongoLab as a fallback if the local server stops working (Ruby/Rails, the mongo driver and Mongoid)?
Background: I used to have a local mongo server, but it kept crashing occasionally, all my apps stopped working, and I had to "repair" the DB to restart it. Then I switched to MongoLab, which I am very satisfied with, but it generates a lot of traffic that I'd like to reduce by having a local "cache", without having to worry about my local cache crashing and taking all my apps down. The DBs are relatively small, so size is not an issue. I'm not trying to eliminate the traffic overhead of communicating with MongoLab, just lower it a bit.
I'm assuming you don't want the MongoLab instance to just be part of a replica set (or perhaps that is not offered). The easiest way would be to add the remote mongod instance as a hidden member (priority 0) and have it replicate data from your local instance.
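For illustration, adding the remote instance as a hidden, priority-0 member could look roughly like this with PyMongo on a reasonably recent server (hostnames are placeholders, and the remote host would have to be reachable and allowed to join your replica set):

    # Sketch: add the remote instance to an existing replica set as a hidden
    # member with priority 0, so it replicates from the local primary but can
    # never be elected. Hostnames are placeholders.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # local primary

    config = client.admin.command("replSetGetConfig")["config"]
    config["version"] += 1
    config["members"].append({
        "_id": max(m["_id"] for m in config["members"]) + 1,
        "host": "example.mongolab.com:31337",            # placeholder remote host
        "priority": 0,
        "hidden": True,
    })
    client.admin.command("replSetReconfig", config)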
An alternative immediate solution is mongooplog, which can poll the oplog on one server and then apply it to another: essentially replication on demand (you would need to seed one instance appropriately and manage any failures yourself). More information here:
http://docs.mongodb.org/manual/reference/mongooplog/
The last option would be to write something yourself using a tailable cursor in your language of choice to feed the oplog data into the remote instance.
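A minimal sketch of that last approach with PyMongo, tailing the local oplog and replaying insert operations onto the remote instance; connection strings are placeholders, and real code would also need to handle updates, deletes and resume points:

    # Sketch: tail the local oplog with a tailable cursor and replay insert
    # operations onto a remote instance. Connection strings are placeholders;
    # a real implementation must also handle updates, deletes and restarts.
    import time
    from pymongo import MongoClient, CursorType

    local = MongoClient("mongodb://localhost:27017")     # must be a replica set member (has an oplog)
    remote = MongoClient("mongodb://user:pass@example.mongolab.com:31337/mydb")

    oplog = local.local["oplog.rs"]
    cursor = oplog.find(cursor_type=CursorType.TAILABLE_AWAIT)

    while cursor.alive:
        for entry in cursor:
            if entry["op"] == "i":                       # "i" = insert
                db_name, coll_name = entry["ns"].split(".", 1)
                remote[db_name][coll_name].insert_one(entry["o"])
        time.sleep(1)                                    # pause before retrying when idle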

What is a good way to access external data from aws

I would like to access external data from my AWS EC2 instance.
In more detail: I would like to specify inside my user data the name of a folder containing about 2M of binary data. When my AWS instance starts up, I would like it to download the files in that folder and copy them to a specific location on the local disk. I only need to access the data once, at startup.
I don't want to store the data in S3 because, as I understand it, this would require storing my AWS credentials on the instance itself, or passing them as user data, which is also a security risk. Please correct me if I am wrong here.
I am looking for a solution that is both secure and highly reliable.
Which operating system do you run?
You can use an Elastic Block Store (EBS) volume. It's like a device you can mount at boot (without credentials), and you have persistent storage there.
You can also sync up instances using something like the Gluster filesystem. See this thread on it.
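If you go the EBS route, the instance can still read its plain user data from the instance metadata service without any credentials, for example to learn which folder to copy at boot. A rough sketch, assuming the metadata service is reachable in its classic (IMDSv1) form and that the volume is already mounted; the mount point and destination paths are placeholders:

    # Sketch: at boot, read a folder name from the instance user data (no
    # credentials needed for the metadata service) and copy that folder from
    # an attached, already-mounted EBS volume to a local destination.
    import shutil
    import urllib.request

    USER_DATA_URL = "http://169.254.169.254/latest/user-data"
    EBS_MOUNT = "/mnt/data-volume"        # placeholder mount point of the EBS volume
    DESTINATION = "/opt/myapp/data"       # placeholder target directory

    folder_name = urllib.request.urlopen(USER_DATA_URL).read().decode().strip()
    shutil.copytree(f"{EBS_MOUNT}/{folder_name}", DESTINATION)
    print(f"Copied {folder_name} to {DESTINATION}")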
