I'm trying to work with multiple preemptible VM instances on Google Compute Engine for an Elasticsearch service, and I have a few questions:
Is the 30-second shutdown window enough to move data from a preemptible Elasticsearch instance to a stable VM?
How do I save the state of a VM that is being preempted and restore it on another one?
Is there an alternative to Google Autoscaler?
You could try running a 'shutdown-script' to create a snapshot with the command:
gcloud compute disks snapshot [disk_name] --zone=[zone] --snapshot-names=[snapshot_name]
Note that you will need to use a different snapshot name each time. This gives you a backup of the current VM state, but there is no automatic way to create another VM from the snapshot once it has been created.
As far as I know there is no predefined alternative that does the same as the autoscaler, although you could also try using the shutdown script to start another VM.
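For illustration, here is a minimal shutdown-script sketch along those lines. It assumes the boot disk has the same name as the instance and that a pre-created stable VM called my-stable-vm exists in the same zone (both are placeholder assumptions, not part of your setup):

#!/bin/bash
# Hypothetical shutdown script: snapshot this instance's boot disk under a unique name.
NAME=$(curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/name")
ZONE=$(curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/zone" | awk -F/ '{print $NF}')
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
# Assumes the boot disk is named after the instance.
gcloud compute disks snapshot "$NAME" --zone="$ZONE" --snapshot-names="${NAME}-${TIMESTAMP}"
# Optionally start a pre-created stable VM to take over (name is a placeholder).
gcloud compute instances start my-stable-vm --zone="$ZONE"

Keep in mind that the requests have to be issued within the 30-second preemption window; a snapshot request that has been accepted by the API should continue server-side even after the VM is gone.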
I have a spring-boot application running on a container. One of the APIs is a file upload API and every time a file is uploaded it has to be scanned for viruses. We have uvscan to scan the uploaded file. I'm looking at adding uvscan to the base image but the virus definitions need to be updated on a daily basis. I've created a script to update the virus definitions. The simplest way currently is to run a cron inside the container which invokes the script. Is there any other alternative to do this? Can the uvscan utility be isolated from the app pod and invoked from the application?
There are many ways to solve this problem; I hope I can help you find the one that suits you best.
From my perspective, it would be pretty convenient to have a CronJob that builds and pushes a new Docker image with uvscan and the updated virus definition database on a daily basis.
In your file processing sequence you can then create a scan Job using the Kubernetes API and give it access to a shared volume containing the file that needs to be scanned.
The scan Job would use the :latest image, so whenever a new image appears in the registry it will pull it and create its pod from it.
The downside is that building images daily consumes some amount of disk space, so you may need a process for removing old images from the registry and from the Docker cache on each node of the Kubernetes cluster.
Alternatively, you can put the AV database on a shared volume, or use Mount Propagation, and update it independently of the pods. If uvscan opens the AV database in read-only mode, this should be possible.
On the other hand, it usually takes time to load virus definitions into memory, so it might be better to run the virus scanner as a Deployment rather than as a Job, restarting it daily after the new image has been pushed to the registry.
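As a rough sketch of that daily rebuild, assuming a registry at registry.example.com and a long-running scanner Deployment named uvscan-scanner (both placeholders, not from the question), the CronJob's container could run something like:

#!/bin/sh
# Hypothetical daily rebuild: bake fresh virus definitions into the image and roll the Deployment.
TAG=$(date +%Y%m%d)
docker build -t registry.example.com/uvscan:latest -t registry.example.com/uvscan:"$TAG" .
docker push registry.example.com/uvscan:latest
docker push registry.example.com/uvscan:"$TAG"
# Requires kubectl 1.15+; restarts the scanner pods so they pull the new :latest image.
kubectl rollout restart deployment/uvscan-scanner

The dated tag is only there so you can roll back; Jobs created from :latest (with the default pull policy for that tag) will pick up whichever image was pushed most recently.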
At my place of work, we also run our dockerized services on EC2 instances. If you only need to update the definitions once a day, I would recommend using an AWS Lambda function. It's relatively affordable, and you don't need to worry about the overhead of a scheduler, etc. If you need help setting up the Lambda, I can always provide more context. Nevertheless, I'm only offering another solution in the AWS realm of things.
In the end I simply added a cron job to the container running the application to update the virus definitions.
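For reference, a minimal sketch of that cron entry, assuming the update script is installed at /opt/uvscan/update-definitions.sh (the path is a placeholder):

# Hypothetical crontab entry baked into the image: refresh virus definitions daily at 02:00.
0 2 * * * /opt/uvscan/update-definitions.sh >> /var/log/uvscan-update.log 2>&1

Note that this requires a cron daemon running alongside the application process inside the container.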
I'm looking to set up a demo environment in Amazon that consists of a pre-configured EC2 image that resets itself back to a snapshot configuration every hour; this would be a Linux VM.
What would be the best way to go about doing this in EC2? Does Amazon offer any tools for scheduling and reverting to the snapshot or would this need to be done from a third party VM or software?
There is no VMware-like 'snapshot' functionality in Amazon EC2 (where you can roll back to a point in time).
The network-attached disk storage system used with Amazon EC2 is called Amazon Elastic Block Store (EBS). While EBS does have a 'snapshot' function, this actually takes a backup of an EBS Volume and stores it in Amazon S3. The snapshot can then be used to create a new EBS volume, which will contain the same contents as the original disk at the time the snapshot was created.
One option would be to launch a new Amazon EC2 instance, which will automatically create a new boot disk from the indicated Amazon Machine Image (AMI). This is the way to launch new machines with the same disk content. However, this might not lend itself well to your hourly reset, since it requires a new machine to be started, which will also trigger a new hourly billing cycle.
You might be able to script the deletion of files or the reload of some database tables, but this will depend upon your particular system and applications.
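If you do go the relaunch route, here is a rough sketch of an hourly reset using the AWS CLI, run from a small scheduler host or cron job (the AMI ID, tag and instance type are placeholders):

#!/bin/bash
# Hypothetical hourly reset: terminate the current demo instance and launch a fresh one from the AMI.
OLD_ID=$(aws ec2 describe-instances \
  --filters "Name=tag:Role,Values=demo" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" --output text)
[ -n "$OLD_ID" ] && aws ec2 terminate-instances --instance-ids $OLD_ID
aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Role,Value=demo}]'

Anything that needs a stable address would additionally have to re-associate an Elastic IP or update a DNS record after each launch.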
My organization's website is a Django app running on front end webservers + a few background processing servers in AWS.
We're currently using Ansible for both:
system configuration (from a bare OS image)
frequent manually-triggered code deployments.
The same Ansible playbook is able to provision either a local Vagrant dev VM, or a production EC2 instance from scratch.
We now want to implement autoscaling in EC2, and that requires some changes towards a "treat servers as cattle, not pets" philosophy.
The first prerequisite was to move from a statically managed Ansible inventory to a dynamic, EC2 API-based one; that's done.
The next big question is how to deploy in this new world where throwaway instances come up and down in the middle of the night. The options I can think of are:
Bake a new fully-deployed AMI for each deploy, create a new AS launch config and update the AS group with that. This sounds very, very cumbersome, but also very reliable because of the clean-slate approach, and it will ensure that any system changes the code requires will be there. Also, no additional steps are needed on instance boot-up, so it's up and running more quickly.
Use a base AMI that doesn't change very often, automatically get the latest app code from git upon boot-up, and start the webserver. Once it's up, just do manual deploys as needed, like before. But what if the new code depends on a change in the system config (a new package, permissions, etc.)? It looks like you then have to start taking care of dependencies between code versions and system/AMI versions, whereas the "just do a full Ansible run" approach was more integrated and more reliable. Is it more than just a potential headache in practice?
Use Docker? I have a strong hunch it can be useful, but I'm not sure yet how it would fit our picture. We're a relatively self-contained Django front-end app with just RabbitMQ and memcached as services, which we're never going to run on the same host anyway. So what benefits are there in building a Docker image with Ansible that contains system packages plus the latest code, rather than having Ansible just do it directly on an EC2 instance?
How do you do it? Any insights/best practices?
Thanks!
This question is very opinion-based, but just to give you my take, I would go with pre-baking the AMIs with Ansible and then using CloudFormation to deploy your stacks with Auto Scaling, monitoring, and your pre-baked AMIs. The advantage of this is that if most of the application stack is pre-baked into the AMI, scaling up will happen faster.
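As a rough sketch of that flow with the AWS CLI (the instance ID, application name, instance type and Auto Scaling group name are placeholders), a deploy could look like:

#!/bin/bash
# Hypothetical deploy: bake an AMI from a freshly provisioned instance, then point the ASG at it.
VERSION=$(date +%Y%m%d%H%M)
AMI_ID=$(aws ec2 create-image --instance-id i-0123456789abcdef0 \
  --name "myapp-$VERSION" --query ImageId --output text)
aws ec2 wait image-available --image-ids "$AMI_ID"
aws autoscaling create-launch-configuration \
  --launch-configuration-name "myapp-$VERSION" \
  --image-id "$AMI_ID" --instance-type t2.medium
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name myapp-asg \
  --launch-configuration-name "myapp-$VERSION"

In a CloudFormation setup the last two steps would instead be a stack update that swaps in the new AMI ID as a parameter.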
Docker is another approach, but in my opinion it adds an extra layer to your application that you may not need if you are already using EC2. Docker can be really useful if, say, you want to containerize several applications on a single server. Maybe you have some extra capacity on a server, and Docker will allow you to run that extra application on the same server without interfering with the existing ones.
Having said that, some people find Docker useful not so much for optimizing resources on a single server, but because it lets you pre-bake your applications into containers. When you deploy a new version or new code, all you have to do is copy/replicate these containers across your servers, stop the old container versions, and start the new ones.
My two cents.
A hybrid solution may give you the desired result. Store the head Docker image in S3, and pre-bake the AMI with a simple fetch-and-run script that executes on start (or pass it into a stock AMI via user-data). Handle version control by moving the head image to your latest stable version; you could probably also implement test stacks of new versions by making the fetch script smart enough to identify which Docker image version to fetch, based on instance tags that are configurable at instance launch.
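A minimal sketch of that fetch-and-run user-data script, assuming the image is exported with docker save to a bucket called my-deploy-bucket and that the head object always points at the current stable build (the bucket, key, and myapp:stable tag are all placeholders):

#!/bin/bash
# Hypothetical user-data: pull the current head image from S3 and run it.
aws s3 cp s3://my-deploy-bucket/images/head.tar /tmp/head.tar
docker load -i /tmp/head.tar
# Assumes the tarball contains an image tagged myapp:stable.
docker run -d --restart=always -p 80:8000 myapp:stable

The instance-tag variant would first look up the instance's version tag with aws ec2 describe-tags and substitute it into the S3 key.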
You can also use AWS CodeDeploy with Auto Scaling and your build server. We use the CodeDeploy plugin for Jenkins.
This setup allows you to:
perform your build in Jenkins
upload to S3 bucket
deploy one by one to all the EC2 instances that are part of the assigned AWS Auto Scaling group.
All that with a push of a button!
Here is the AWS tutorial: Deploy an Application to an Auto Scaling Group Using AWS CodeDeploy
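For context, the step the plugin automates boils down to pushing the build output to S3 as a revision and then creating a deployment against the Auto Scaling group's deployment group; roughly (the application, deployment group and bucket names are placeholders):

# Hypothetical CLI equivalent of the Jenkins plugin's deploy step.
aws deploy push --application-name MyApp \
  --s3-location s3://my-build-bucket/myapp-build.zip --source ./build-output
aws deploy create-deployment --application-name MyApp \
  --deployment-group-name MyApp-ASG-Group \
  --s3-location bucket=my-build-bucket,key=myapp-build.zip,bundleType=zip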
I have a Windows 2008 EC2 instance to which I have done some customizing on the EBS boot drive.
I started the instance as m1.small (or m1.large) and the instance storage does not appear as an additional drive.
I've read that the -b switch of the ec2-run-instances command allows you to create mappings for the ephemeral instance storage. However, ec2-run-instances creates a new instance, and in my case the instance already exists, so I start it with ec2-start-instances, which has no -b switch for ephemeral instance storage.
Is there any way I can get to the ephemeral instance storage that comes with an m1.small instance for my existing EBS-booted instance?
UPDATE: It seems that nowadays (Feb 2015) Windows machines mount the ephemeral instance storage as the Z: drive.
I'm afraid this functionality isn't available (yet) in Amazon EC2, but it's a very good question in fact - the common answer used to refer to the explicit launch-time requirement; see e.g. ec2-modify-instance-attribute:
Note
If you want to add ephemeral storage to an Amazon EBS-backed instance, you must add the ephemeral storage at the time you launch the instance. For more information, go to Overriding the AMI's Block Device Mapping in the Amazon Elastic Compute Cloud User Guide, or to Adding A Default Instance Store in the Amazon Elastic Compute Cloud User Guide. [emphasis mine]
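For completeness, specifying the ephemeral storage at launch time looks roughly like this, shown with the current AWS CLI rather than the old ec2-run-instances tool (the AMI ID and device name are placeholders):

# Hypothetical launch with an instance-store (ephemeral) volume mapped as an extra device.
aws ec2 run-instances --image-id ami-12345678 --instance-type m1.small \
  --block-device-mappings '[{"DeviceName":"xvdb","VirtualName":"ephemeral0"}]'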
That hasn't been much of an issue in the past, but given that the recent introduction of 64-bit ubiquity implies a significant improvement in vertical-scaling versatility (see EC2 Updates: New Medium Instance, 64-bit Ubiquity, SSH Client), this is suddenly a real topic - and your question yields even more questions in turn:
What happens for the converse case, i.e. when I start a sufficiently large instance with lots of ephemeral storage and scale it down (and possibly up again) thereafter?
In case the initial block device mapping is retained somehow, should we always start with a large instance therefore? (I actually doubt that this is the case though.)
This question can probably only be addressed by the AWS team, so you may want to file a support request, or at least relay the question to the Amazon Elastic Compute Cloud forum.
I think what you're asking (but correct me if I'm wrong) is "how do I add additional storage to an EC2 instance?".
In which case, the answer is:
Select the Volumes panel in the AWS console and create a new volume of the size you want, making sure it's in the same Availability Zone as the instance you want to attach it to. Then select that new Volume, and click 'Attach' - select the instance you want to attach it to, and click OK.
Now log on to the instance, open the Disk Management plugin in Computer Management, format the newly attached, unallocated disk, and give it whatever drive letter you wish. It will then show up in Explorer as a standard Windows drive.
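The same thing can be scripted with the AWS CLI; a minimal sketch (the Availability Zone, size, and IDs are placeholders):

# Hypothetical CLI equivalent: create an EBS volume and attach it to the instance.
aws ec2 create-volume --availability-zone us-east-1a --size 50
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 --device xvdf

Note that this adds an EBS volume, which is persistent (and billed) storage - it is not the free ephemeral instance store the question was originally about.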
I have TeamCity (7.0.2) successfully spinning up an EC2 VM from a custom AMI, running our build, and sending back the build artifacts.
However, even when I used to do this with older TeamCity versions, I was always unhappy with the notion that it simply terminates the instances after they are done, and then creates new instances from the configured AMI the next time a build agent is needed.
Can I get TeamCity to issue "stop" commands instead, followed by "start" commands? This has a tonne of advantages - quicker spin-up time, allowing for named instances in the agent stats, and saving the Mercurial clone to EBS for the next build are just three.
p.s. I guess I could use chained builds to call the EC2 API directly rather than use the built-in cloud support, but that sounds like a lot of work and feels flaky.
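For what it's worth, a rough sketch of what such a chained-build step would call, using the AWS CLI and a placeholder instance ID:

# Hypothetical chained-build steps around the agent instance.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
# ...and before the next build:
aws ec2 start-instances --instance-ids i-0123456789abcdef0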
We plan to provide support for stopping and starting EBS-backed instances in TeamCity 7.1.
Please vote for TW-16419.
Note that TeamCity 7.0 may leak EBS volumes; see TW-12517.