Couchbase/Elasticsearch connector for multiple buckets - elasticsearch

Is there a way to replicate 2 or many couchbase buckets to elasticsearch using a single configuration file?
I am currently using this version of the Couchbase Elasticsearch connector:
https://docs.couchbase.com/elasticsearch-connector/4.0/index.html
I replicate my data correctly, but I need to run one command per bucket, with a different configuration file (.toml) each time.
I also could not run the cbes command multiple times on the same server, because the metrics port 31415 is already in use.
Is there any way to handle many connector groups at the same time?

In version 4.0 a single connector process can replicate from only one bucket. This is because the indexing rules and all of the underlying network connections to Couchbase Server are scoped to the bucket level.
The current recommendation is to create multiple config files and run multiple connector processes. It's understood that this can be complicated to manage if you're replicating a large number of buckets.
If you're willing to get creative, you could use the same config file template for multiple buckets. The idea is that you'd write a config file with some placeholders in it, and then generate the actual config file by running a script that replaces the placeholders with the correct values for each connector.
The next update to the connector will add built-in support for environment variable substitution in the config file. This could make the templating approach easier.
Here are some options for avoiding the metrics port conflict:
Disable metrics reporting by setting the httpPort key in the [metrics] section to -1.
OR use a random port by setting it to 0.
OR use the templating idea described above and plug a unique port number into each generated config file.
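For illustration, a templated config might look like the sketch below, where ${BUCKET_NAME} and ${METRICS_PORT} are placeholders your script would substitute for each bucket. The [metrics] httpPort key is the one mentioned above; the exact name and location of the bucket key should be checked against the example config that ships with the connector.
[couchbase]
  bucket = '${BUCKET_NAME}'

[metrics]
  httpPort = ${METRICS_PORT}   # use -1 to disable metrics, or 0 for a random port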
It's worth mentioning that a future version of the connector will support something we're calling "Autonomous Operations Mode". When the connector runs in this mode, the configuration will be stored in a central location (probably a Consul server). It will be possible to reconfigure a connector group on-the-fly, and add or remove workers to the group without having to stop all the workers and edit their config files. Hopefully this will simplify the management of large deployments.

Related

Azure Databricks processing files differently based on the configuration

We have an application that processes a huge Excel file and calculates data from it based on different conditions (coded in a Scala notebook).
The issue we're facing is that the same file produces inconsistent results at different times and/or with different Azure Databricks compute configurations.
We've already double-checked our Scala notebook code and it doesn't have any bugs, so it might be something on the configuration end (not sure).
Below is the current configuration of my dev compute

hazelcast-jet deployment and data ingestion

I have a distributed system running on AWS EC2 instances. My cluster has around 2000 nodes. I want to introduce a stream processing model which can process metadata being periodically published by each node (CPU usage, memory usage, I/O, etc.). My system only cares about the latest data. It is also OK to miss a couple of data points when the processing model is down. Thus, I picked Hazelcast Jet, which is an in-memory processing model with great performance. Here I have a couple of questions regarding the model:
What is the best way to deploy hazelcast-jet to multiple ec2 instances?
How to ingest data from thousands of sources? The sources push data instead of being pulled.
How do I configure the client so that it knows where to submit the tasks?
It would be super useful if there were a comprehensive example I could learn from.
What is the best way to deploy hazelcast-jet to multiple ec2 instances?
Download and unzip the Hazelcast Jet distribution on each machine:
$ wget https://download.hazelcast.com/jet/hazelcast-jet-3.1.zip
$ unzip hazelcast-jet-3.1.zip
$ cd hazelcast-jet-3.1
Go to the lib directory of the unzipped distribution and download the hazelcast-aws module:
$ cd lib
$ wget https://repo1.maven.org/maven2/com/hazelcast/hazelcast-aws/2.4/hazelcast-aws-2.4.jar
Edit bin/common.sh to add the module to the classpath. Towards the end of the file is a line
CLASSPATH="$JET_HOME/lib/hazelcast-jet-3.1.jar:$CLASSPATH"
You can duplicate this line and replace -jet-3.1 with -aws-2.4.
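The duplicated line would then read:
CLASSPATH="$JET_HOME/lib/hazelcast-aws-2.4.jar:$CLASSPATH"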
Edit config/hazelcast.xml to enable the AWS cluster discovery. The details are here. In this step you'll have to deal with IAM roles, EC2 security groups, regions, etc. There's also a best practices guide for AWS deployment.
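As a rough sketch (not a drop-in config), the join section of config/hazelcast.xml for AWS discovery with Hazelcast 3.x looks roughly like this; the IAM role, region, and security group names below are placeholders:
<network>
    <join>
        <multicast enabled="false"/>
        <tcp-ip enabled="false"/>
        <aws enabled="true">
            <iam-role>jet-cluster-role</iam-role>
            <region>us-east-1</region>
            <security-group-name>jet-sg</security-group-name>
        </aws>
    </join>
</network>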
Start the cluster with jet-start.sh.
How do I configure the client so that it knows where to submit the tasks?
A straightforward approach is to specify the public IPs of the machines where Jet is running, for example:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getGroupConfig().setName("jet");
clientConfig.addAddress("54.224.63.209", "34.239.139.244");
However, depending on your AWS setup, these may not be stable, so you can configure the client to discover the members as well. This is explained here.
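As a sketch of what client-side AWS discovery could look like with Hazelcast 3.x (the region and security group values are placeholders, and the exact ClientAwsConfig properties are worth double-checking against the hazelcast-aws documentation):
ClientConfig clientConfig = new ClientConfig();
clientConfig.getGroupConfig().setName("jet");
// Enable AWS discovery instead of listing fixed addresses
ClientAwsConfig awsConfig = clientConfig.getNetworkConfig().getAwsConfig();
awsConfig.setEnabled(true);
awsConfig.setRegion("us-east-1");
awsConfig.setSecurityGroupName("jet-sg");
awsConfig.setInsideAws(false); // client connects from outside AWS via public IPs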
How to ingest data from thousands of sources? The sources push data instead of being pulled.
I think your best option for this is to put the data into a Hazelcast Map, and use a mapJournal source to get the update events from it.
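A minimal sketch of that approach with the Jet 3.1 Pipeline API, reusing the clientConfig from above (the map name "metrics" and the String payload are assumptions, and the event journal also has to be enabled for that map in the member config):
// Each node pushes its latest sample into the IMap, e.g. jet.getMap("metrics").set(nodeId, payload)
Pipeline p = Pipeline.create();
p.drawFrom(Sources.<String, String>mapJournal("metrics", JournalInitialPosition.START_FROM_CURRENT))
 .withoutTimestamps()
 .drainTo(Sinks.logger());          // replace with your real processing stages and sink

JetInstance jet = Jet.newJetClient(clientConfig);
jet.newJob(p);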

How to send updated data from java program to NiFi?

I have microservices running, and when a web user updates data in the DB using the microservice endpoints, I want to send the updated data to NiFi as well. This data contains updated lists of names, deleted names, edited names, etc. How do I do it? Which processor should I use on the NiFi side?
I am new to NiFi and have not tried anything yet. I am reading documentation that can guide me.
No source code is written yet; I will share it here once I write it.
The expected result is that NiFi should get the updated list of names and refer to it when generating the required alerts/triggers, etc.
You can do it in lots of ways: MQ, Kafka, or HTTP (using ListenHTTP). Just deploy whichever is relevant to you and configure it; you can even listen to a directory (using ListFile & FetchFile).
You can connect NiFi to pretty much everything, so just choose how you want to connect your micro services to NiFi.
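For example, with the HTTP option, a ListenHTTP processor listening on a port of your choice can receive POSTs from the microservice; the port 8081 and base path contentListener below are assumptions to adapt to your processor configuration:
URL url = new URL("http://nifi-host:8081/contentListener");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setRequestProperty("Content-Type", "application/json");
String payload = "{\"updatedNames\":[\"alice\",\"bob\"],\"deletedNames\":[\"carol\"]}";
try (OutputStream os = conn.getOutputStream()) {
    os.write(payload.getBytes(StandardCharsets.UTF_8));
}
int status = conn.getResponseCode(); // ListenHTTP acknowledges the received flow file with a 200
conn.disconnect();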

Solutions for a secure distributed cache

Problem: I want to cache user information such that all my applications can read the data quickly, but I want only one specific application to be able to write to this cache.
I am on AWS, so one solution that occurred to me was a version of memcached with two ports: one port that accepts read commands only and one that accepts reads and writes. I could then use security groups to control access.
Since I'm on AWS, if there are solutions that use out-of-the box memcached or redis, that'd be great.
I suggest you use ElastiCache (Memcached) with port 11211 open, then create an EC2 instance and set your security group so that only this server can access your ElastiCache cluster. Use this server to filter your applications, so only one specific application can write to the cache. You control the access with security groups, scripts, or iptables. If you are not using a VPC, you can use a cache security group instead.
I believe you can accomplish this using Redis (instead of Memcached) which is also available via ElastiCache. Once the instance has been created, you will want to create a replication group and associate it to the cache cluster you already launched.
You can then add instances to the replication group. Instances within the replication group are simply replicated from the Master Cache Cluster (single Redis instance) and so are (by default) read-only.
So, in this setup, you have a master node (single endpoint) that you can write to and as many read nodes (multiple endpoints) as you would like.
You can take security a step further and assign different routing rules to the replication group (via the VPC) so that the applications reading data do not have access to the master node (the only one that can write data).
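In application code the split then comes down to which endpoint each application is given; a sketch with Jedis and made-up ElastiCache endpoint names:
// Only the writer application (and its security group) is allowed to reach the primary endpoint.
Jedis writer = new Jedis("my-cache.xxxxxx.ng.0001.use1.cache.amazonaws.com", 6379);
writer.hset("user:42", "name", "Alice");

// Read-only applications use a replica endpoint; writes there are rejected
// because replica nodes are read-only by default.
Jedis reader = new Jedis("my-cache-ro.xxxxxx.ng.0001.use1.cache.amazonaws.com", 6379);
String name = reader.hget("user:42", "name");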

Splitting a Redis RDB file

Currently I'm using Redis on an EC2 machine with 60 GB of RAM and no slaves, but as my data grows I will need more memory.
I was thinking to migrate to 2 x 60G machines and split the already existing data between the two.
Is there any tool for splitting the RDB file? I haven't found anything specifically designed for this.
If you want to split your data, you will need a way to shard your keys so that some keys are written/read from server A and the others from server B.
There is no way to split an RDB file, but there is something you can do to achieve what you want.
First, start a Redis instance on your second server and make it a slave of your current server, but set the param slave-read-only to false. This will cause the slave to synchronize and read all of your Redis data from the master. So far you only have a slave with all the data, but now we will do the interesting bit.
Then you need to decide on a sharding strategy. Some Redis clients do this for you. For example, the official Ruby client knows how to handle it if you configure it. You will need to configure your client so keys are sharded to A and B (or alternatively use twemproxy, so the clients won't know about the different servers and twemproxy will take care of it).
Once you have the clients configured, you need to deploy the new clients to production and immediately configure the slave as not a slave anymore. You can do this directly using the CONFIG command on the slave server (don't forget to persist the config using CONFIG REWRITE), or you can change the config file of the slave and restart, whichever is more convenient for you. Since the slave is configured with slave-read-only false, it will accept writes even in slave mode. This means that if you change the config directly from redis-cli, you can go from slave to a sharded stand-alone Redis without restarting, which I think is quite cool.
Be aware that once you shard, you will have to be careful with MULTI commands or LUA scripts. If you are using twemproxy you won't be able to use those commands, but if you are sharding on the client side, you will still be able to use MULTI or LUA. Just be careful to use a sharding mechanism in which all the related keys stay on the same server.
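As an illustration of client-side sharding in Java (the answer above mentions the Ruby client; Jedis offers the same idea through ShardedJedis, shown here with placeholder hostnames):
List<JedisShardInfo> shards = Arrays.asList(
        new JedisShardInfo("server-a.example.com", 6379),
        new JedisShardInfo("server-b.example.com", 6379));
ShardedJedis sharded = new ShardedJedis(shards);
// Keys are hashed consistently, so "user:1000" always lands on the same server.
sharded.set("user:1000", "some value");
String value = sharded.get("user:1000");
sharded.close();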
Step 1: install https://github.com/leonchen83/redis-rdb-cli/
Step 2: create a config file that sets the splitting condition.
Content of nodes.conf:
34b6e1dfb871ad30398ef5edd6b9a954617e6ec1 127.0.0.1:10003#20003 master - 0 1531044047088 3 connected 8193-16383
89d020a7e727e81f003836207902ae26fe05fd51 127.0.0.1:10001#20001 myself,master - 0 1531044047000 1 connected 0-8192
vars currentEpoch 6 lastVoteEpoch 0
Step 3: run rdt -s your-dump.rdb -c nodes.conf -o /path/to
After step 3, this will generate 2 RDB files in the /path/to directory: 34b6e1dfb871ad30398ef5edd6b9a954617e6ec1.rdb and 89d020a7e727e81f003836207902ae26fe05fd51.rdb