I am not able to update the AMI on my launch configuration. Here is the error I see,
and it rolls back. I checked the reason on Google, and supposedly it should work fine even if there are running instances; it just won't update the instances that are already running. But it does not work like that in our case. Please suggest!
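For context, my understanding is that a launch configuration itself is immutable, so changing the AMI normally means creating a replacement launch configuration and swapping it into the Auto Scaling group. With the AWS CLI that looks roughly like this (a sketch only; every name and ID below is a placeholder):

# Create a new launch configuration with the new AMI...
aws autoscaling create-launch-configuration \
  --launch-configuration-name app-lc-v2 \
  --image-id ami-0abcd1234 \
  --instance-type t2.small

# ...then point the existing Auto Scaling group at it. Instances that are
# already running keep the old AMI until they are replaced.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name app-asg \
  --launch-configuration-name app-lc-v2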
I have a Spring Boot application deployed in PCF, scaled to 12 instances. One or two instances keep going down. I want to restart just those instances instead of restarting the whole application. How do you automatically restart a single instance in PCF when it goes down or crashes?
I am using the following command to restart a single instance manually:
cf restart-app-instance APP_NAME INDEX
There is nothing you need to do. The Cloud Foundry platform will monitor your application via the configured health check and automatically restart it.
If that's not happening, you would want to look into why it's not being restarted automatically rather than trying to hack together some other way of restarting it.
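If you want to dig into what the platform is doing, something like this from the cf CLI can help (a sketch; the /health endpoint is an assumption about your app, e.g. a Spring Boot Actuator endpoint, and the --endpoint flag requires a reasonably recent cf CLI):

# Show recent crash and restart events for the app.
cf events APP_NAME
# Switch to an HTTP health check so the platform probes a real endpoint.
# (/health is a placeholder path; adjust it to what your app exposes.)
cf set-health-check APP_NAME http --endpoint /health

Note that a health-check change only takes effect once the app is restarted.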
Can you try something? Delete the instance at the given index:
cf curl DELETE /v2/apps/$(cf app APP_NAME --guid)/instances/INDEX_OF_THE_INSTANCE
Reference: http://apidocs.cloudfoundry.org/272/apps/terminate_the_running_app_instance_at_the_given_index.html
Wait for some time so that CF can recreate the instance.
I have not tried this in my lab; if I get any update, I will post it here in the comments.
Thanks,
Chandan Patra
I am using Foreman 1.6 with AWS EC2 as the compute resource.
The problem is that Foreman is not able to resolve the finish template when user-data is enabled on the image, and I am not able to provision the VM.
When user-data is disabled on the image, Foreman is able to resolve the finish template and provision the VM (though without applying the template, i.e. no Puppet client installation).
Could you guide me on where I am going wrong? I have been struggling with this issue for two weeks.
Thanks,
Sekhar
You need to create a new provisioning script of type "user-data" (or just use the "Kickstart default user data") and associate it with your OS. Finish scripts are not the right "kind" for cloud-init.
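For illustration, a minimal user-data script that installs the Puppet client might look like this (a sketch only; it assumes a Debian/Ubuntu image, and puppet.example.com stands in for your Puppet master):

#!/bin/bash
# Install and start the Puppet agent on first boot.
apt-get update -y
apt-get install -y puppet
# Point the agent at the Puppet master (placeholder hostname).
cat >> /etc/puppet/puppet.conf <<EOF
[agent]
server = puppet.example.com
EOF
puppet agent --enable
puppet agent --test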
I have an AMI which is configured with my production code setup. I am using Nginx + Unicorn as the server setup.
The problem I am facing is that whenever traffic goes up, I need to boot an instance, log in to it, and do a git pull, bundle update, and asset precompile, which is time consuming. I want to avoid all of this manual work.
I want a script/process that automates the whole deployment (git pull, bundle update, and precompile) as soon as I boot a new instance from this AMI.
Is there a good way to get this done? Any help would be appreciated.
You can place your commands in /etc/rc.local (commands in this file are executed when the server boots).
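Something along these lines could go in rc.local (a rough sketch; the app path, the "deploy" user, and the Unicorn restart command are assumptions you would adapt):

#!/bin/sh
# Boot-time deployment sketch for /etc/rc.local.
# /var/www/application and the "deploy" user are placeholders.
su - deploy -c 'cd /var/www/application &&
  git pull origin master &&
  bundle install --deployment &&
  RAILS_ENV=production bundle exec rake assets:precompile'
# Restart Unicorn so the new code is served (adjust to your init system).
service unicorn restart
exit 0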
But the best way is to use Capistrano. You need to add require "capistrano/bundler" to your deploy.rb file, and bundle install will be run automatically. For more information you can read this article: https://semaphoreapp.com/blog/2013/11/26/capistrano-3-upgrade-guide.html
An alternative approach is to deploy your app to a separate EBS volume (you can still mount this at /var/www/application or wherever it currently is).
After deploying, you create an EBS snapshot of this volume. When you create a new instance, you tell EC2 to create a new volume for the instance from that snapshot, so the instance starts with the latest gems/code already installed (I find bundle install can take several minutes). All your startup script needs to do is mount the volume (and if you added it to fstab when you made the AMI, you don't even need to do that). I much prefer scaling operations like this to have no external dependencies (e.g. what would you do if GitHub or rubygems.org had an outage just when you needed to deploy?).
You can take this a step further by using Amazon's Auto Scaling service. In a nutshell, you create a launch configuration where you specify the AMI, instance type, volume snapshots, etc. Then you control the group size manually (through the web console or the API), on a fixed schedule, or based on CloudWatch metrics. Amazon will create or destroy instances as needed, using the information in your launch configuration.
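For reference, setting that up with the AWS CLI might look roughly like this (a sketch; the names, IDs, availability zone, and device name are all placeholders):

# Launch configuration whose extra volume is created from the code snapshot.
aws autoscaling create-launch-configuration \
  --launch-configuration-name app-lc-v1 \
  --image-id ami-12345678 \
  --instance-type t2.small \
  --block-device-mappings '[{"DeviceName":"/dev/sdf","Ebs":{"SnapshotId":"snap-12345678"}}]'

# Auto Scaling group that creates/destroys instances from that configuration.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name app-asg \
  --launch-configuration-name app-lc-v1 \
  --min-size 1 --max-size 4 \
  --availability-zones us-east-1a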
As the new autoscaling functionality became available in Windows Azure a few weeks ago, I enabled it on my service and everything was great.
Then I deleted the deployment and deployed it again. Now I get an error saying "Autoscale failed for [role name]", and I can only scale the service manually.
I also tried deleting the service altogether and recreating it from scratch, with no improvement. This is the only service I have this problem with; if I deploy the same solution to another service, it works.
Does anyone know how to get around this?
I have an already running Linux instance. I right-clicked on that instance and created an image (EBS AMI), entered the details, and a few minutes later I had my AMI listed in the Images -> AMIs section of the EC2 console.
I right-clicked on this AMI and requested a spot instance, filled in the form, and selected the correct security group. It created fine and the status checks were green, 2/2.
However, when I tried to connect to this new instance, I just got an error:
ssh_exchange_identification: read: Connection reset by peer
I checked that I was specifying the path to my key file; I checked the security group, and it was all fine. I deleted the SSH rule and re-applied it; it still failed.
I logged out and logged in to my other instance (which this new one was based on) with no issues. I deleted my new spot instance and created another based on my AMI. Same issue.
I then created a new instance based on a stock Ubuntu AMI, and was able to log in fine.
So for some reason, I can't log in to an instance created from an AMI I made via the GUI console.
I removed the old AMI and re-created it.
This time, however, I unmounted (and removed from fstab) another EBS drive which I didn't want included in the AMI.
Seems to work now.
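For anyone hitting the same thing, the cleanup before imaging was essentially this (the device's mount point is an example from my setup):

# Unmount the extra EBS volume and remove its fstab entry before creating
# the image, so the AMI doesn't expect a volume that won't be attached.
sudo umount /mnt/data
sudo sed -i '/\/mnt\/data/d' /etc/fstab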
Creating a Custom AMI from the AWS Management Console
I had a situation where I could not SSH into a new instance launched from a custom AMI I had created via the AWS Dashboard.
I found that it is safer not to generate an AMI from a running instance, even when "No reboot" is left unchecked. It is more reliable to stop the running instance manually and then create the image.
On the face of it, leaving "No reboot" unchecked should be reliable, as it is supposed to stop the running instance, take the image copy, and then restart the instance on your behalf, but something can evidently go wrong in that process.
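If you prefer the command line, the manual stop-then-image sequence is roughly this (the instance ID and image name are placeholders):

# Stop the instance and wait until it is fully stopped before imaging,
# so the filesystem is captured in a consistent state.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 create-image --instance-id i-0123456789abcdef0 --name my-custom-ami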