Can a master EC2 machine also act as a worker machine? - amazon-ec2

I have a scenario where I have spun up 2 EC2 instances. One acts as the master and the other as the worker. I am running the test in step load mode.
I wanted to check whether I can use the master to also act as a worker (while running in step load, since I need the UI), or whether I need to spin up a new EC2 instance for the 2nd worker. Thanks!

Yes, you can run multiple instances of Locust on the same machine, including in EC2. The master process itself won't act as a worker, but if you separately start another Locust process as a worker, just as on the other EC2 instances, that worker process can connect to the master process running on the same instance in the same way.
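As a minimal sketch of the commands involved (assuming Locust 1.x+ flag names and a locustfile.py already on the instance):
# on the master EC2 instance: start the master, which serves the web UI
locust -f locustfile.py --master
# in a second shell on the same instance: start a local worker
locust -f locustfile.py --worker --master-host=127.0.0.1
Workers on the other EC2 instances connect the same way, just with --master-host pointing at the master's address.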

Related

If an AWS spot instance is stopped by AWS and then restarts, will it just start where it left off?

I am running luigi, a pipeline manager, which is processing 1000 tasks. Currently I poll for the AWS termination notice; if it is present, I requeue the job, wait 30 minutes, then launch a new server that starts all the tasks from scratch. However, it sometimes restarts the same job multiple times, which is inefficient.
Instead, I am considering using create_fleet with InstanceInterruptionBehaviour=Stop. If I do this, will the instance still be running the luigi daemon and retain the state of all the tasks when it restarts?
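For reference, the termination-notice polling mentioned above boils down to checking the instance metadata endpoint, roughly like this (a sketch assuming IMDSv1):
# returns a timestamp once the instance is marked for interruption, HTTP 404 otherwise
curl -s http://169.254.169.254/latest/meta-data/spot/termination-time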
All InstanceInterruptionBehaviour=Stop does is effectively shut down your EC2 instance rather than terminate it. Since the "persistent" request setting is required, in addition to EBS-backed storage, you will keep all the data on the attached EBS volumes as of the time the instance stops.
Whether execution resumes is completely dependent on the application itself (Luigi in this case) being able to store the state of its execution and pick back up from where it left off. For one thing, you'll want to ensure the service daemon is enabled to start automatically on boot (example):
sudo systemctl enable yourservice
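For Luigi specifically, a minimal sketch of such a unit might look like this (assuming the central scheduler is installed at /usr/local/bin/luigid; the unit name and paths are placeholders for your setup):
# write a basic systemd unit for the Luigi central scheduler
sudo tee /etc/systemd/system/luigid.service <<'EOF'
[Unit]
Description=Luigi central scheduler
After=network.target
[Service]
ExecStart=/usr/local/bin/luigid
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
# enable it so it starts automatically when the stopped instance restarts
sudo systemctl enable luigid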

JMeter master/slave in AWS on demand

I was hoping to get some help/suggestions regarding my JMeter Master/slave test set up.
Here is my scenario:
I need to do load testing using a JMeter master/slave set up. I am planning to launch the master and slave nodes on AWS (Windows boxes, due to a dependency of one of the tools I launch via JMeter). I want to launch this master/slave set up in AWS on demand, where I can specify how many slave nodes I want.
I looked at a lot of blogs about using JMeter with AWS, and everywhere they assume the nodes will be launched manually and need further configuration for the master and slave nodes to talk to each other. For tests with 5 or 10 slave nodes this would be fine, but for my tests I want to launch 50 instances (again, the tool I use with JMeter has a limitation that forces me to use each slave node as 1 user, instead of using 1 slave node to act as multiple users), and manually updating each of the slave nodes would be very cumbersome. So I was wondering if anybody else has run into this issue and has any suggestions. In the meantime I am looking into other solutions that would let me use the same slave node to mimic multiple users, which would reduce the number of slave nodes I need to launch.
Regards,
Vikas
Have you seen the JMeter ec2 Script? It seems to be what you're looking for.
If for any reason you don't want to use this particular script, be aware that Amazon has an API, so you should be able to automate instance creation with a script using the AWS Java SDK or the AWS CLI.
You can even automate instance creation from a separate JMeter script using either the JSR223 Sampler or the OS Process Sampler.
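As a rough illustration of the API route, instance creation and slave discovery can be scripted with the AWS CLI (all IDs, counts, and tags below are placeholders):
# launch 50 pre-baked JMeter slave instances from a prepared AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 --count 50 --instance-type t3.medium --tag-specifications 'ResourceType=instance,Tags=[{Key=role,Value=jmeter-slave}]'
# collect their private IPs to build the master's remote_hosts list
aws ec2 describe-instances --filters Name=tag:role,Values=jmeter-slave --query 'Reservations[].Instances[].PrivateIpAddress' --output text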

How to replace ECS cluster instances without downtime or reduced redundancy?

I currently have a try-out environment with ~16 services divided over 4 micro-instances. Instances are managed by an autoscaling group (ASG). When I need to update the AMI of my cluster instances, currently I do:
Create a new launch config and edit the ASG to use it.
Detach all instances (with the replacement option) from the ASG and wait until the new ones are listed in the cluster instance list.
MANUALLY find and deregister the old instances from the ECS cluster (very tricky)
Now the services are killed by ECS due to deregistering the instances :(
Wait 3 minutes until the services are restarted on the new instances
MANUALLY find the EC2 instances in the EC2 instance list and terminate them (being very, very careful not to terminate the new ones).
With this approach I have about 3 minutes of downtime, and I shiver at the idea of doing this in production environments. Is there a way to do this without downtime, while keeping the overall number of instances the same (so without 200% scaling settings, etc.)?
You can update the Launch Configuration with the new AMI and then assign it to the ASG. Make sure to include the following in the user-data section:
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
Then terminate one instance at a time, and wait until the new one is up and automatically registered before terminating the next.
This could be scripted to automate the whole process, too.
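A sketch of what such a script might do with the AWS CLI (the cluster name, container instance ARN, and instance ID are placeholders; this assumes your ECS version supports container instance draining):
# drain the old container instance so ECS reschedules its tasks onto the others first
aws ecs update-container-instances-state --cluster your_cluster_name --container-instances <container-instance-arn> --status DRAINING
# once its running-task count reaches zero, let the ASG replace it without shrinking capacity
aws autoscaling terminate-instance-in-auto-scaling-group --instance-id i-0123456789abcdef0 --no-should-decrement-desired-capacity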

AWS EC2 Scheduling Tasks with Windows Scheduler

If I have an Amazon Redshift instance and an Amazon EC2 instance (running Windows, amongst other things), can I set up Windows scheduled jobs on the EC2 instance that connect to Redshift and run COPY commands?
Really what I am asking is: is EC2 just a VM in the cloud, and can I do anything I like in it (like set up Windows scheduled jobs and be guaranteed they will run at the scheduled time)?
It seems that AWS Data Pipeline is the recommended way to have scheduled jobs load data into Redshift, but this starts to get pricey with frequent jobs.
I spun up a Redshift instance and an EC2 Windows 2012 instance.
I installed the Redshift ODBC driver.
I ran a VBScript that incremented a counter in a table.
I scheduled that script in Task Scheduler.
I logged out of the EC2 instance, came back, and the data had been updated.
So it seems that using Windows Task Scheduler on EC2 is a valid alternative to AWS Data Pipeline, if you want to do it that way.
I haven't yet tried the COPY command, but I will come back and document that too if I have time.
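For completeness, the command-line equivalent of scheduling such a script (the task name, schedule, and script path are placeholders; run from PowerShell):
# run the VBScript that connects via ODBC and issues the Redshift COPY, every hour
schtasks /create /tn "RedshiftCopy" /tr "cscript C:\scripts\redshift_copy.vbs" /sc hourly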

AWS EC2 - Cloudera Manager - Stopping instances

I have set up a Hadoop cluster on Amazon EC2 using Cloudera Manager. Cloudera Manager created two instances and all is working as expected. I am trying to stop the Cloudera-created instances through the AWS console, but there is no option to stop; there are only "Terminate" and "Reboot". I don't want to terminate these instances, as I want to reuse them.
How do I stop these instances?
Since your instances came from an instance store-backed AMI, you will only be able to reboot or terminate them. Look in the Management Console under "Root device" to confirm this is the case.
To get around this, you can create an AMI from your instances, then restart your environment using the new AMI, which would give you the option to stop your instances.
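If you prefer the CLI, the root device type can be checked like this (the instance ID is a placeholder):
# prints "ebs" for EBS-backed instances, "instance-store" otherwise
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query 'Reservations[].Instances[].RootDeviceType' --output text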
