ScriptEvaluate against specific server in cluster - stackexchange.redis

Is it possible to execute ScriptEvaluate against a specific server in a cluster?
For example, trying to find out the number of keys on each instance:
_db.ScriptEvaluate("return #redis.call('keys', 'moo.*')");
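In StackExchange.Redis, `IDatabase.ScriptEvaluate` is routed by key, so it is not the natural tool for per-node queries. One way to sidestep the script entirely: `DBSIZE` already returns the key count, and it is exposed per endpoint on `IServer` via `ConnectionMultiplexer.GetServer`. A minimal sketch (the connection string is a hypothetical seed endpoint; requires a live cluster to run):

```csharp
using System;
using StackExchange.Redis;

class ClusterKeyCounts
{
    static void Main()
    {
        // "cluster-node:6379" is a hypothetical seed endpoint for the cluster.
        var muxer = ConnectionMultiplexer.Connect("cluster-node:6379");

        foreach (var endpoint in muxer.GetEndPoints())
        {
            IServer server = muxer.GetServer(endpoint);
            if (!server.IsConnected) continue;

            // DBSIZE returns the key count for this node, without the O(n) KEYS scan.
            Console.WriteLine($"{endpoint}: {server.DatabaseSize()} keys");
        }
    }
}
```

Note that this enumerates every known endpoint, replicas included, so totals may double-count unless you filter replicas out.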

Related

Elasticsearch Cross Cluster Search behind nginx proxy

I want to set up a sort of aggregation over multiple Elasticsearch clusters based on the Cross Cluster Search feature.
I have the following layout:
As the seed for Cross Cluster Search I am using the only cluster address available via the network.
After querying I am getting the error:
[elasticsearch][172.16.10.100:9300] connect_timeout[30s]
I can't change publish_host for the nodes, because that address is used inside the cluster for node communication.
Is there any option to force Cross Cluster Search to use only the provided address?
Or is there any other way to set up some kind of proxy so a user can search/visualize data from multiple isolated Elasticsearch clusters in Kibana?
I believe the only solution is to upgrade to Elasticsearch 7, which provides the cluster.remote.${cluster_alias}.proxy option, where you can specify the incoming IP address for the cross cluster search.
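As a sketch, on 7.7+ the same idea is expressed with the "proxy" connection mode in elasticsearch.yml on the coordinating cluster (the alias and address below are placeholders for your setup):

```yaml
# elasticsearch.yml on the cluster that coordinates the cross-cluster search.
# Assumes ES 7.7+, where proxy mode superseded the older
# cluster.remote.<alias>.proxy setting.
cluster:
  remote:
    my_remote:                              # hypothetical cluster alias
      mode: proxy
      proxy_address: "172.16.10.100:9300"   # the single reachable address
```

With proxy mode, the coordinating cluster opens all connections through the configured address instead of sniffing the remote nodes' publish addresses.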

RethinkDB local and cloud clusters connection

I'm thinking about app architecture and want to know: is it possible to create a local cluster for specific tables and connect it to a cloud cluster?
An additional question: is it possible to choose where a shard is created (on which machine) for a particular table (i.e. to tell the cloud cluster that for this table I need shards in the local cluster)?
For example, I want the table db.localTable to be sharded in the local cluster to reduce latency and increase performance by running queries locally, while keeping the ability to run queries in the cloud cluster when the local cluster is not accessible. All data between the clusters should stay consistent.
Thanks in advance.
Actually, I've found the solution: to pin specific servers for replicas and shards, you should use server tags and apply the changes using ReQL and table settings. For details see RethinkDB - Scaling, sharding and replication and the RethinkDB - Architecture FAQ.
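The server-tag approach can be sketched in ReQL (Python driver shown; the tag names "local" and "cloud" and the connection details are hypothetical, and assume each server was started with a matching --server-tag flag):

```python
import rethinkdb as r  # classic driver API; newer releases use r = RethinkDB()

conn = r.connect("localhost", 28015)

# Pin db.localTable's primary replica to a server tagged "local",
# while keeping one replica on a server tagged "cloud" for failover.
r.db("db").table("localTable").reconfigure(
    shards=1,
    replicas={"local": 1, "cloud": 1},
    primary_replica_tag="local"
).run(conn)
```

reconfigure with a replicas dict keyed by server tag is how RethinkDB expresses "this table's data must live on these machines".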

Arangodump between two different AWS ec2 clusters

I have created a graph database in ArangoDB on a 5-machine AWS cluster. I do not have enough space in the database AWS cluster to store the dump, so I would like to take a dump of the database onto an AWS instance in a different cluster. I have the key files to connect to the machines. How can I do this using arangodump? Thanks.
Do I understand correctly that you're using DC/OS clusters on AWS?
The problem with arangoimp is that it doesn't know how to authenticate with the DC/OS proxy, and thus can't reach the routes it would require to import into ArangoDB.
The problem is similar to Running Arango Shell on DC/OS cluster - you want to use sshuttle, as lalitlogical describes, to forward the ArangoDB server port (usually 8529) to your target environment.
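A rough sketch of that workflow, run from the instance that has the free disk space (the SSH host, subnet CIDR, coordinator address, and database name below are all placeholders for your environment):

```shell
# Tunnel the cluster's internal subnet through the reachable master
# (hypothetical jump host and CIDR):
sshuttle -r admin@dcos-master 10.0.0.0/16

# In another terminal, dump over the tunnel onto local disk:
arangodump \
  --server.endpoint tcp://10.0.0.5:8529 \
  --server.username root \
  --server.database mygraphdb \
  --output-directory /mnt/backup/dump
```

Because arangodump writes to --output-directory on the machine where it runs, running it on the other cluster's instance is exactly what puts the dump where the space is.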

how to start elastic search in non cluster mode

I have two different machines running Elasticsearch server instances. They automatically form a cluster, and changes made on one instance are reflected on the other instance on the other machine. I have changed the cluster.name property in the elasticsearch.yml file in the config folder and the issue is resolved. I wanted to know if I can start an Elasticsearch server instance in non-cluster mode?
You can't start the Elasticsearch server in non-cluster mode.
But if you want the two servers to run independently (each in its own cluster), there are two options I can think of:
Disable multicast and don't list the hosts in unicast discovery
Change the cluster.name so each has a different name
The easiest is to set node.local: true
This prevents Elasticsearch from trying to connect to other nodes.
Using a custom cluster name is also a good idea in any case, just to prevent unintended exchange of data; use different names for production, testing, and development.
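The options above map to a few lines of elasticsearch.yml. A sketch, assuming a pre-2.0 Elasticsearch (where node.local and zen multicast settings exist, as the question's era implies; the cluster name is a placeholder):

```yaml
# Option 1: in-JVM transport only — the node never joins anything.
node.local: true

# Option 2: keep networked nodes apart by name and discovery.
cluster.name: my-dev-cluster                 # unique per machine
discovery.zen.ping.multicast.enabled: false  # no multicast discovery
discovery.zen.ping.unicast.hosts: []         # no known peers
```

Either approach gives each machine its own single-node cluster, which is the closest Elasticsearch comes to a "non-cluster" mode.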

Assigning the host name to a hadoop job using weka in AWS

I have been using the wekaDistributedHadoop1.0.4 and wekaDistributedBase1.0.2 packages on my local machine to run some of the basic jobs. There is a field "HDFS host" which must be filled in order to run the jobs. I have been using "localhost" since I have been testing on my local machine, and this works fine. I blindly tried "localhost" when running on AWS EMR, but the job failed. What I would like to know is: what host name should I enter into the field so that Weka will call the correct master? Is it the public DNS name provided when starting the cluster, or is there a method in the API which gets that address for me?
If you want to do it manually:
Create a cluster and keep it alive. You can find its info in the Amazon EC2 instances management console, under the security groups elastic mapreduce master/slave. Find the master's address, log in to the master node, and edit the conf file to fill in the right host name.
If you need to do it automatically:
Write a shell script executed in a bootstrap action. You can refer to https://serverfault.com/questions/279297/what-is-the-easiest-way-to-get-a-ec2-public-dns-inside-a-running-instance
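The automatic route boils down to querying the EC2 instance metadata service from the node itself. A bootstrap-action sketch (what you do with the resulting name, such as writing it into a Weka job config, is specific to your setup):

```shell
#!/bin/bash
# Fetch this instance's public DNS name from the EC2 metadata service.
# 169.254.169.254 is the fixed metadata address available on every instance.
MASTER_DNS=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)

# Use it wherever the "HDFS host" value is needed, e.g.:
echo "HDFS host: ${MASTER_DNS}"
```

Run on the EMR master node, this yields the same public DNS name the console shows when the cluster starts.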
