Easy way to switch a Scylla cluster to larger AWS instances - amazon-ec2

I have a 16-node Scylla cluster running on AWS i3en.xlarge instances.
Is there an easy way for me to switch the cluster to i3en.2xlarge or i3en.4xlarge other than replacing the existing nodes one by one (i.e., add a new node, then remove an old one)?
If I add one i3en.2xlarge instance, will the cluster automatically balance the data so that the i3en.2xlarge node uses roughly twice the disk space of an i3en.xlarge node?

You can add a logical DC made up of the new nodes, run a repair, and then get rid of the original DC.

1. Add a new DC with the desired instance type (see the procedure @TzachLivyatan posted in his comment)
2. Wait for streaming to the new DC to complete
3. Run a full cluster repair and wait for it to complete
4. Decommission the "original" DC:
https://docs.scylladb.com/operating-scylla/procedures/cluster-management/decommissioning_data_center/
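
At the command level, the steps look roughly like the sketch below. This is only a minimal outline, assuming a keyspace named mykeyspace and data centers named DC_old and DC_new (placeholders, adjust to your topology); it omits the snitch/seed configuration covered in the linked procedures.

# 1. Extend replication to the new DC (repeat for every keyspace that needs it)
cqlsh -e "ALTER KEYSPACE mykeyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'DC_old': 3, 'DC_new': 3};"

# 2. On each new node, stream the existing data from the old DC
nodetool rebuild -- DC_old

# 3. Run a full repair (on every node, or via Scylla Manager)
nodetool repair -pr

# 4. Once clients point at DC_new, drop the old DC from replication and decommission each old node
cqlsh -e "ALTER KEYSPACE mykeyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'DC_new': 3};"
nodetool decommission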

Related

How to find the right proportion between Hadoop instance types

I am trying to find out how many MASTER, CORE, and TASK instances are optimal for my jobs. I couldn't find any tutorial that explains how to figure it out.
How do I know if I need more than 1 core instance? What are the "symptoms" I would see in EMR's console metrics that would hint I need more than one core? So far, when I tried the same job with 1*core+7*task instances, it ran pretty much like on 8*core, but that doesn't make much sense to me. Or is it possible that my job is so CPU-bound that the I/O is negligible? (I have a map-only job that parses Apache log files into a CSV file.)
Is it possible to have more than 1 master instance? If yes, when is it needed? I wonder, because my master node is pretty much just waiting for the other nodes to do the job (0% CPU) 95% of the time.
Can the master and the core node be identical? I can have a master-only cluster, where the one and only node does everything. It seems logical to be able to have a cluster with 1 node that is both the master and the core, and the rest being task nodes, but it seems impossible to set it up that way with EMR. Why is that?
The master instance acts as a manager and coordinates everything that goes on in the whole cluster. As such, it has to exist in every job flow you run, but one instance is all you need. Unless you are deploying a single-node cluster (in which case the master instance is the only node running), it does not do any heavy lifting as far as actual MapReducing is concerned, so the instance does not have to be a powerful machine.
The number of core instances that you need really depends on the job and how fast you want it processed, so there is no single correct answer. A good thing is that you can resize the core/task instance group, so if you think your job is running slowly, you can add more instances to a running cluster.
One important difference between core and task instance groups is that core instances store actual data on HDFS whereas task instances do not. As a consequence, you can only increase the core instance group (because removing running instances would lose the data stored on them). On the other hand, you can both increase and decrease the task instance group by adding or removing task instances.
These two instance types can therefore be used to adjust the processing power of your job. Typically, you use on-demand instances for core instances because they must be running all the time and cannot be lost, and spot instances for task instances because losing a task instance does not kill the entire job (e.g., the tasks not finished by lost task instances will be rerun on core instances). This is one way to run a large cluster cost-effectively using spot instances.
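
As an illustration of that resizing, here is a minimal sketch using the AWS CLI; the cluster ID and instance group ID are placeholders, and the instance type, count, and bid price are examples only:

# List the instance groups of the running cluster to find their IDs
aws emr list-instance-groups --cluster-id j-XXXXXXXXXXXXX

# Grow an existing (e.g., task) instance group to 8 instances
aws emr modify-instance-groups --instance-groups InstanceGroupId=ig-XXXXXXXXXXXX,InstanceCount=8

# Or add a new task instance group backed by spot instances
aws emr add-instance-groups --cluster-id j-XXXXXXXXXXXXX --instance-groups InstanceCount=4,InstanceGroupType=TASK,InstanceType=m4.large,BidPrice=0.10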
The general description of each instance type is available here:
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/InstanceGroups.html
Also, this video may be useful for using EMR effectively:
https://www.youtube.com/watch?v=a5D_bs7E3uc

Aerospike's behavior on EC2

In my test setup on EC2 I have done the following:
One Aerospike server is running in ZoneA (say Aerospike-A).
Another node of the same cluster is running in ZoneB (say Aerospike-B).
The application using above cluster is running in ZoneA.
I am initializing AerospikeClient like this:
Host[] hosts = new Host[1];
hosts[0] = new Host(PUBLIC_IP_OF_AEROSPIKE_A, 3000);
AerospikeClient client = new AerospikeClient(policy, hosts);
With the above setup I am seeing the following behavior:
Writes are happening on both Aerospike-A and Aerospike-B.
Reads are only happening on Aerospike-A (the data is around 1 million records, occupying 900 MB of memory and 1.3 GB of disk).
Question: Why are reads not going to both nodes?
If I take Aerospike-B down, everything works perfectly. There is no outage.
If I take Aerospike-A down, all the writes and reads start failing. I waited 5 minutes for the other node to take the traffic, but it didn't work.
Questions:
a. In the above scenario, I would expect Aerospike-B to take all the traffic. But this is not happening. Is there anything I am doing wrong?
b. Should I be giving both hosts while initializing the client?
c. I had executed "asinfo -v 'config-set:context=service;paxos-recovery-policy=auto-dun-all'" on both nodes. Is that creating a problem?
In EC2 you should place all the nodes of the cluster in the same AZ of the same region. You can use the rack-awareness feature to set up nodes in two separate AZs; however, you will be paying a latency penalty on each of your writes.
What you're seeing is likely due to misconfiguration. Each EC2 machine has a public IP and a local IP. Machines sitting on the same subnet may reach each other through the local IP, but a machine in a different AZ cannot. You need to make sure access-address is set to the public IP if your cluster nodes are spread across two availability zones. Otherwise you end up with clients that can reach only some of the nodes, lots of proxy events as the cluster nodes try to compensate and forward your reads and writes to the correct node for you, and weird data issues when nodes leave or join the cluster.
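For reference, a minimal sketch of the relevant stanza in aerospike.conf on each node (the IP is a placeholder; see the networking doc below for the full set of options):

network {
    service {
        address any
        port 3000
        # IP that clients (and nodes in other subnets) should use to reach this node
        access-address <public-ip-of-this-node>
    }
}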
For more details:
http://www.aerospike.com/docs/operations/configure/network/general/
https://github.com/aerospike/aerospike-client-python/issues/56#issuecomment-117901128

Why are map tasks always running on a single node?

I have a fully-distributed Hadoop cluster with 4 nodes. When I submit my job to the JobTracker, which decides that 12 map tasks are appropriate for my job, something strange happens: the 12 map tasks always run on a single node instead of across the entire cluster. Before asking this question, I had already tried the following:
Trying a different job
Running start-balancer.sh to rebalance the cluster
But it does not work, so I hope someone can tell me why and how to fix it.
If all the blocks of the input data files are on that node, the scheduler will prioritize the same node.
Apparently the source data files are on one data node right now. It can't be the balancer's fault. From what I can see, your HDFS must have a replication factor of one, or you are not actually running a fully-distributed Hadoop cluster.
Check how your input is being split. You may only have one input split, meaning that only one node will be used to process the data. You can test this by adding more input files to your system, placing them on different nodes, and then checking which nodes are doing the work.
If that doesn't work, check to make sure that your cluster is configured correctly. Specifically, check that your name node has paths to your other nodes set in its slaves file, and that each slave node has your name node set in its masters file.
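
To verify the block placement and replication mentioned above, a quick sketch using the standard HDFS tools (the path is an example):

# Show which datanodes hold the blocks of the input files, and their replication factor
hadoop fsck /user/hadoop/input -files -blocks -locations

# Confirm that all 4 datanodes are live and holding data
hadoop dfsadmin -report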

How to know on which node a job will execute in Quartz Scheduler with TerracottaJobStore?

I have bound TerracottaJobStore to the Quartz Scheduler.
How does TerracottaJobStore determine which job runs next, and on which node it executes?
Which algorithm is used for node selection in TerracottaJobStore? Any idea?
If Quartz Scheduler is used with TerracottaJobStore and there is a job due to execute next, the selection of the node for that job is effectively random.
Using 'Quartz Where' it is possible to make jobs run based on criteria.
That is, if you want a job that must run on a node with at least 2 cores, or
a job that runs on a node meeting a CPU load-average criterion (e.g., 70%), or
a job that runs on a node with at least 330 MB of free Java heap memory,
then 'Quartz Where' is useful.
It is predictable which node a job will execute on only in the case of 'Quartz Where'.
With open-source Terracotta's JobStore you don't get to decide which node the job will be executed on. Not that it really happens randomly, but each scheduler node behaves as it would in non-clustered mode: at a regular interval, and based on when the next trigger is due to fire, every node tries to acquire the next trigger(s). Since all the nodes in the cluster behave the same way, the first one to acquire the lock will also be the one to acquire the triggers.
Terracotta EE comes with the Quartz Where feature that lets you describe where jobs should be fired. You can learn more about Quartz Where by watching this short screencast I did: http://www.codespot.net/blog/2011/03/quartz-where-screencast/
Hope that helps.
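
For context, a minimal sketch of the quartz.properties used to cluster the scheduler through the open-source TerracottaJobStore (the scheduler name and Terracotta host:port are placeholders); note that there is no property here that pins jobs to a particular node:

# Same scheduler name on every node, auto-generated instance id
org.quartz.scheduler.instanceName = MyClusteredScheduler
org.quartz.scheduler.instanceId = AUTO

# Store jobs and triggers in Terracotta so all nodes share them
org.quartz.jobStore.class = org.terracotta.quartz.TerracottaJobStore
org.quartz.jobStore.tcConfigUrl = terracotta-server:9510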

How can I add new nodes to a live hbase/hadoop cluster?

I run some batch jobs with data inputs that are constantly changing, and I'm having problems provisioning capacity. I am using Whirr to do the initial setup, but once I start, for example, 5 machines, I don't know how to add new machines to the cluster while it's running. I don't know in advance how complex or how large the data will be, so I was wondering if there is a way to add new machines to a cluster and have it take effect right away (or with some delay, but without having to bring the cluster down and bring it back up with the new nodes).
There is an exact explanation of how to add a node:
http://wiki.apache.org/hadoop/FAQ#I_have_a_new_node_I_want_to_add_to_a_running_Hadoop_cluster.3B_how_do_I_start_services_on_just_one_node.3F
At the same time, I am not sure that already-running jobs will take advantage of these nodes, since planning where to run each task happens at job start time (as far as I understand).
I also think that it is more practical to run TaskTrackers only on these transient nodes.
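In practice, the FAQ entry above boils down to starting the daemons on the new machine once its configuration points at the existing NameNode and JobTracker. A rough sketch for a classic Hadoop 1.x setup (paths assume the standard tarball layout):

# On the new node, after copying the cluster's Hadoop configuration
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start tasktracker

# Optionally, spread existing blocks onto the new datanode
bin/start-balancer.sh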
Check the files referred to by the parameters below:
dfs.hosts => dfs.include
dfs.hosts.exclude
mapreduce.jobtracker.hosts.filename => mapred.include
mapreduce.jobtracker.hosts.exclude.filename
You can add the list of hosts to the files dfs.include and mapred.include and then run
hadoop mradmin -refreshNodes ;
hadoop dfsadmin -refreshNodes ;
That's all.
BTW, the 'mradmin -refreshNodes' facility was added in 0.21.
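
For that refresh to have any effect, the NameNode and JobTracker must already be pointed at those include files via the parameters listed above. A minimal sketch of the relevant properties (the file paths are examples, not required names):

In hdfs-site.xml:
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.include</value>
</property>

In mapred-site.xml:
<property>
  <name>mapreduce.jobtracker.hosts.filename</name>
  <value>/etc/hadoop/conf/mapred.include</value>
</property>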
