AWS Ruby SDK - Delete application and all associated environments - ruby

I'm using the AWS Ruby SDK to interact with Amazon Elastic Beanstalk. I have applications with one or more running environments. The application names are easily known to my Ruby code, but the environment names were dynamically generated, so they aren't easily obtainable.
I hoped that the delete_application method would also terminate all running environments automatically, but the following error results from trying to delete a Beanstalk application with running environments:
Unable to delete application dsw88-test-app-prod because it has a version that is deployed to a running environment.
Deleting an application manually in the AWS console automatically removes its running environments. Is there a way to easily delete an application and all its running environments using the Ruby SDK?

After more research, I don't believe this is possible. Instead, you must use the following process:
Get a list of all the environments in your application using the describe_environments call
Terminate each one of those running environments using the terminate_environment call
Once those terminations are done (you should wait for them to finish), run the delete_application call to delete your application
It would be nice if Amazon provided a way to delete all that stuff programmatically with one command (like they do in the UI), but it doesn't look like that is currently supported.
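A minimal sketch of that flow, assuming the aws-sdk-elasticbeanstalk gem (v3 of the Ruby SDK); the region and application name are illustrative:

    require "aws-sdk-elasticbeanstalk"

    eb = Aws::ElasticBeanstalk::Client.new(region: "us-west-2")
    app_name = "dsw88-test-app-prod"

    # 1. List every environment that belongs to the application
    environments = eb.describe_environments(application_name: app_name).environments

    # 2. Terminate each running environment
    environments.each do |env|
      eb.terminate_environment(environment_id: env.environment_id)
    end

    # 3. Wait until nothing is still shutting down, then delete the application
    loop do
      remaining = eb.describe_environments(application_name: app_name,
                                           include_deleted: false).environments
      break if remaining.all? { |e| e.status == "Terminated" }
      sleep 20
    end

    eb.delete_application(application_name: app_name)

The polling loop is deliberately simple; in real code you would add a timeout and handle environments that fail to terminate.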

Related

Is it possible for .gcloudignore in Google Cloud to skip updating a file?

I have just started developing a Golang app and have deployed it on Google App Engine. When I try to connect my local server to the Cloud SQL instance through the proxy, I can only connect over TCP.
However, when connecting to the same Cloud SQL instance from App Engine, I can only connect over a Unix socket.
To cope with this, I have made changes to my local environment handler file so that it can adapt to both the local and the GCloud config, but I'm not sure how I can skip the update of just this file when deploying to GCloud. To be clear, I don't want App Engine to delete this file, I just want the CLI to avoid uploading the new version of the handler file.
I use this command for deploying: gcloud app deploy
Currently, I deploy directly to App Engine instead of pushing through VCS. Also, if there is a way to detect whether the app is running on App Engine, that would be really helpful.
TIA
Got it. In case anyone gets stuck in a similar situation: you can make use of the environment variables that App Engine sets. Although the documentation lists these environment variables, I would still recommend verifying them in the Cloud Console.
Documentation link for Go 1.12+ Runtime env:
https://cloud.google.com/appengine/docs/standard/go/runtime
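For example, the second-generation runtimes documented there (including Go 1.12+) set GAE_ENV to "standard", so the handler can branch at runtime instead of differing between the local and deployed copies. A rough sketch of the idea (shown in Ruby for brevity; the variable names are set by the runtime regardless of language, and CLOUD_SQL_CONNECTION_NAME is a hypothetical variable you would define yourself, e.g. in app.yaml):

    # GAE_ENV is set to "standard" by second-generation App Engine runtimes.
    if ENV["GAE_ENV"] == "standard"
      # Deployed on App Engine: connect to Cloud SQL over its unix socket
      db_host = "/cloudsql/#{ENV["CLOUD_SQL_CONNECTION_NAME"]}"
    else
      # Local development against the Cloud SQL proxy: plain TCP
      db_host = "127.0.0.1"
    end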

Zabbix multiple agent instances of different versions

I was wondering if there is any way that different versions of the Zabbix agent can be run on a Windows server.
The documentation mentions something about multiple "instances", but it doesn't look like this creates any additional services.
I've tried running version 3.2 alongside 2.4 on a test server, but only one service can run at a time; if I try to start the second service, I get:
As you can see from the screenshot, the services have different names and call different versions of the executable.
Both services run, just not at the same time.
You can't run multiple agents on the same ListenPort. Use different ports per agent instance.
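For example, give each agent its own configuration file with its own port (paths and port numbers here are illustrative):

    # C:\zabbix-3.2\zabbix_agentd.conf
    ListenPort=10050

    # C:\zabbix-2.4\zabbix_agentd.conf
    ListenPort=10051

The Zabbix server then needs the second agent's interface registered on that non-default port for the host.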

Where are my application files after deploy to google compute engine?

I'm following the tutorial to deploy a Ruby app to Google Compute Engine. Everything works; however, I now want to SSH into the app to run migrations etc. After some searching I was able to find my files under a Docker instance here: /var/lib/docker/aufs/diff/e2972171505a931749490e13d21e4f8c0bb076245ef4b592aff6667c45b2dd13/app
Is there a simpler way to access my files? Perhaps a symlinked folder?
Ruby apps on Google AppEngine run via Docker. Because AppEngine is a PaaS provider, it's discouraged (though possible) to run commands on production machines. If you'd like to run database migrations, please run them locally and point your configuration at your production database.
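For example, assuming a Rails app that reads DATABASE_URL (the connection string below is only a placeholder), the migration can be run from your workstation against the production database:

    DATABASE_URL="postgres://app_user:secret@203.0.113.10:5432/app_production" bundle exec rake db:migrate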

best way to bundle update on server while booting

I have an AMI which is configured with my production code. I am using Nginx + Unicorn as the server setup.
The problem I am facing is that whenever traffic goes up I need to boot a new instance, log in to it, do a git pull and bundle update, and precompile the assets, which is time consuming. I want to avoid all of that manual work.
I want a script/process that automates the whole deployment (git pull, bundle update, asset precompilation) as soon as I boot a new instance from this AMI.
Is there a good way to get this done? Any help would be appreciated.
You can place your commands in /etc/rc.local (commands in this file are executed when the server boots).
But the best way is to use Capistrano. You need to require "capistrano/bundler" in your Capistrano configuration (the Capfile in Capistrano 3), and bundle install will be run automatically as part of each deploy. For more information you can read this article: https://semaphoreapp.com/blog/2013/11/26/capistrano-3-upgrade-guide.html
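A minimal sketch of the Capistrano 3 wiring (the application name, repository URL, and deploy path are placeholders; capistrano-rails is only needed if it's a Rails app):

    # Gemfile
    group :development do
      gem "capistrano", "~> 3.0"
      gem "capistrano-bundler"
      gem "capistrano-rails"
    end

    # Capfile
    require "capistrano/setup"
    require "capistrano/deploy"
    require "capistrano/bundler"        # runs bundle install on each deploy
    require "capistrano/rails/assets"   # runs assets:precompile on each deploy

    # config/deploy.rb
    set :application, "myapp"
    set :repo_url, "git@github.com:example/myapp.git"
    set :deploy_to, "/var/www/myapp"

With this in place, cap production deploy pulls the code, installs the bundle, and precompiles assets on the target servers.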
An alternative approach is to deploy your app to a separate EBS volume (you can still mount this inside /var/www/application or wherever it currently is).
After deploying, you create an EBS snapshot of this volume. When you create a new instance, you tell EC2 to create a new volume for the instance from the snapshot, so the instance starts with the latest gems/code already installed (I find bundle install can take several minutes). All your startup script needs to do is mount the volume (and if you added it to the fstab when you made the AMI, you don't even need to do that). I much prefer scaling operations like this to have no external dependencies (e.g. what would you do if GitHub or rubygems.org had an outage just when you needed to deploy?).
You can even take this a step further by using Amazon's Auto Scaling service. In a nutshell, you create a launch configuration where you specify the AMI, instance type, volume snapshots, etc. Then you control the group size either manually (through the web console or the API), according to a fixed schedule, or based on CloudWatch metrics. Amazon will create or destroy instances as needed, using the information in your launch configuration.
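A sketch of that setup with the v3 Ruby SDK (the region, volume ID, AMI ID, and name strings are placeholders):

    require "aws-sdk-ec2"
    require "aws-sdk-autoscaling"

    ec2 = Aws::EC2::Client.new(region: "us-east-1")
    autoscaling = Aws::AutoScaling::Client.new(region: "us-east-1")

    # Snapshot the EBS volume that holds the deployed code and installed gems
    snapshot = ec2.create_snapshot(volume_id: "vol-0123456789abcdef0",
                                   description: "app code + bundle")

    # Wait for the snapshot to finish before referencing it
    ec2.wait_until(:snapshot_completed, snapshot_ids: [snapshot.snapshot_id])

    # Launch configuration that builds a fresh volume from that snapshot
    autoscaling.create_launch_configuration(
      launch_configuration_name: "app-v42",
      image_id: "ami-0123456789abcdef0",
      instance_type: "m3.medium",
      block_device_mappings: [
        { device_name: "/dev/sdf", ebs: { snapshot_id: snapshot.snapshot_id } }
      ]
    )

The instance's startup script (or an fstab entry baked into the AMI) then only has to mount /dev/sdf before starting Nginx and Unicorn.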

How to update new version of application in a running Ubuntu EC2 machine programmatically

How can I deploy new versions of my application to Ubuntu machines in a private subnet that are part of an Auto Scaling group? I am using a CloudFormation script to bring up the entire setup. Can I include a script in the CloudFormation template to do this? Please help.
There are a few ways to accomplish the upgrade. Many people use Amazon's provided helper, cfn-hup. The way I do it is different and is as follows:
When an instance launches, have a script install the application from files fetched from S3.
Update S3 with the new version.
Use a script (or do it manually) to shut down instances one at a time, waiting for Auto Scaling to bring up replacements with the new version installed (a sketch of this rotation is below).
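A sketch of that rotation using the v3 Ruby SDK (the region and group name are placeholders; real code would also want a timeout and application-level health checks):

    require "aws-sdk-autoscaling"

    autoscaling = Aws::AutoScaling::Client.new(region: "us-east-1")
    group_name  = "my-app-asg"

    group = autoscaling.describe_auto_scaling_groups(
      auto_scaling_group_names: [group_name]
    ).auto_scaling_groups.first

    group.instances.each do |instance|
      # Terminate one instance without lowering desired capacity, so Auto Scaling
      # launches a replacement that installs the new version from S3 on boot.
      autoscaling.terminate_instance_in_auto_scaling_group(
        instance_id: instance.instance_id,
        should_decrement_desired_capacity: false
      )

      sleep 30 # give Auto Scaling time to register the termination

      # Wait until the group is back to a full set of in-service instances
      loop do
        current = autoscaling.describe_auto_scaling_groups(
          auto_scaling_group_names: [group_name]
        ).auto_scaling_groups.first
        in_service = current.instances.count { |i| i.lifecycle_state == "InService" }
        break if in_service >= current.desired_capacity
        sleep 30
      end
    end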
