Rubber stalling while executing `bundle:install' - amazon-ec2

A rubber deployment, per the quick start instructions and using the latest 3.1.0 version, reaches the stage of fetching and installing the gems (the last one loaded is pg) on an m1.small instance. I see no mention of therubyracer in the scroll of gems...
The process successfully completes deploy:setup, rubber:collectd:bootstrap, deploy:setup, and deploy:update_code, but upon deploy:finalize_update the callback being triggered is bundle:install.
Invariably, the process stalls at this point. The /etc/hosts file does contain the proper entries (52.25.135.252 production.foo.com ec2-52-25-135-252.us-west-2.compute.amazonaws.com ip-172-[...]).
One oddity is that when trying to ssh into the instance
ssh -i aws-eb production.foo.com
or via the ec2-user
ssh -i aws-eb ec2-user@ec2-52-25-135-252.us-west-2.compute.amazonaws.com
access is denied with
Permission denied (publickey).
for a key that I was using with Elastic Beanstalk until a few days ago and had inserted into the config/rubber/rubber.yml file.
I will attempt this with a new key pair, but how can a key now be deemed "public" and unusable?
Update
Setting up a new key pair does not alter the behaviour: the process is stuck at the same point, and I cannot ssh into the instance. production.foo.com does properly return what is configured up to this point, the nginx-on-Ubuntu welcome page.
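A quick way to see which user and key the instance actually accepts is verbose ssh; a sketch using the key file and hostname from above (ubuntu, not ec2-user, is the default login user on the Ubuntu AMIs rubber targets):
ssh -v -i aws-eb ubuntu@ec2-52-25-135-252.us-west-2.compute.amazonaws.com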

As far as I can tell, having iterated about 10 times over this, instance memory is the issue.
The smallest instance that has not choked at this point is image_type: m3.medium. AMIs per instance type can be found here.
The automatic suggestion of an m1.small during vulcanization of the application is optimistic, in my opinion.
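A minimal sketch of the relevant rubber.yml change, assuming the stock quick-start layout (the AMI id is a placeholder to be matched to the instance type):
# config/rubber/rubber.yml (excerpt)
image_type: m3.medium    # m1.small runs out of memory during bundle:install
image_id: ami-xxxxxxxx   # placeholder: choose an AMI compatible with the instance type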

Related

"There are no OCR keys; creating a new key encrypted with given password" Crashes when running Chainlink node

I am setting up a Chainlink node on AWS EC2 + AWS RDS (PostgreSQL) and have followed every step in the documentation (https://docs.chain.link/docs/running-a-chainlink-node/).
Everything runs smoothly until the OCR keys creation step. Once it gets there, it shows "There are no OCR keys; creating a new key encrypted with given password". This is supposed to happen, but the docker container exits right after.
[Image: output after OCR keys creation]
I have tried the following:
Checking whether there is a problem with the specific table these keys are stored in within the PostgreSQL database (public.encrypted_ocr_key_bundles), which gets populated if this step succeeds. Nothing there so far.
Using a different version of the Chainlink docker image (see the Chainlink Docker hub). I am currently using version 0.10.0. No success either, even with the latest ones.
Using AWS CloudFormation to "let AWS + Chainlink" take care of this, but even so I have encountered similar problems, so no success.
I have thought about populating the OCR table manually with a query, but I am far from having a proper OCR key generation script in hand, so I do not like this option.
Does anybody know what else to try/where the problem could be?
Thanks a lot in advance!
UPDATE: It was a simple memory problem. The AWS micro instance (1 GB RAM) was running out of memory when the OCR keys were generated. I only got a log of the error after switching to an updated version of the CL docker image. In conclusion: migrate to a bigger instance. Should've thought of that, but learning never stops!
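For anyone hitting the same wall, a quick way to confirm an out-of-memory kill before resizing; a sketch, where the container name chainlink is an assumption to substitute with your own:
# did the kernel OOM-kill the container?
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' chainlink
# system-wide evidence of the kill
dmesg | grep -i 'killed process'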

How can I start an instance from the new EC2 CLI?

I have an ec2 micro instance. I can start it from the console, ssh into it (using a .pem file) and visit the website it hosts.
Using the old ec2 CLI, I can start the instance and perform other actions including ssh and website access.
I am having trouble with the new ec2 CLI. When I do "aws ec2 start-instances --instance-ids i-xxx" I get the message "A client error (InvalidInstanceID.NotFound) occurred when calling the StartInstances operation: The instance ID 'i-xxx' does not exist".
Clearly the instance exists, so I don't know what the message is really indicating.
Here is some of my thinking:
One difference between the old and new CLI is that the former used .pem files whereas the new interface uses access keys. The instance has an access key pair associated with it, but I have tried all the credentials I can find, and none of them change anything.
I tried creating an IAM user and a new access key pair for it. The behavior in all cases is unchanged (starting from the console or the old CLI, web access, and ssh all still work), but the new CLI still fails.
I realize that there is a means for updating the access key pairs by detaching the volume (as described here), but the process looks a little scary to me.
I realize that I can clone another instance from the image, but the image is a little out of date, and I don't want to lose my changes.
Can anyone suggest what the message really means and what I can do to get around the problem?
The problem had to do with credentials. I had the correct environment variables (AWS_ACCESS_KEY and AWS_SECRET_KEY) set, but they didn't match what was in my .aws/credentials file. That is, despite what it says here, the new CLI worked only when I had the environment variables and the credentials file correct and in sync.
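A sketch of the two places that have to agree, with placeholder values (note that the new CLI reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment):
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# environment variables, if set, must match the file or be unset
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx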
Configure your AWS CLI with "aws configure", using the region in which your EC2 instance resides, and then try the same command. The instance should start.
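A sketch of that, with the region and instance id as placeholders (querying the wrong region is exactly what produces InvalidInstanceID.NotFound):
aws configure    # prompts for access key, secret key, and a default region, e.g. us-west-2
aws ec2 start-instances --instance-ids i-xxx --region us-west-2    # --region overrides the configured default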

Installed Zone Alarm on Amazon EC2 Windows Instance and cannot access now. How do I fix this?

I messed this up.
I installed ZoneMinder and now I cannot connect to my VPS via Remote Desktop; it must have blocked connections. I didn't know it would start blocking right away rather than letting me configure it first.
How can I solve this?
Note: My answer is under the assumption this is a Windows instance due to the use of 'Remote Desktop', even though ZoneMinder is primarily Linux-based.
Short answer is you probably can't and will likely be forced to terminate the instance.
But at the very least you can take a snapshot of the hard drive (EBS volume) attached to the machine, so you don't lose any data or configuration settings.
Without network connectivity your server can't be accessed at all, and unless you've installed other services on the machine that are still accessible (e.g. ssh, telnet) that could be used to reverse the firewall settings, you can't make any changes.
I would attempt the following, in this order (although they're long shots):
Restart your instance using the AWS Console (maybe the firewall won't be enabled by default on reboot and you'll be able to connect).
If this doesn't work (which it shouldn't), you're going to need to stop your crippled instance, detach the volume, spin up another EC2 instance running Windows, and attach the old volume to the new instance (see the CLI sketch after these steps).
Here's the procedure with screenshots of the exact steps, except your specific steps to disable the new firewall will be different.
After this is done, you need to find instructions on manually uninstalling your new firewall.
Take a snapshot of the EBS volume attached to it to preserve your data (essentially the C: drive); this appears on the EC2 console page under the 'Volumes' menu item. This way you don't lose any data, at least.
Start another Windows EC2 instance, and attach the EBS volume from the old one to this one. RDP into the new instance and attempt to manually uninstall the firewall.
At a minimum at this point you should be able to recover your files and service settings very easily into the new instance, which is the approach I would expect you to have more success with.
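A sketch of that volume-swap using the AWS CLI; all ids and the device name are placeholders:
aws ec2 create-snapshot --volume-id vol-xxxx --description "backup before firewall removal"
aws ec2 stop-instances --instance-ids i-crippled
aws ec2 detach-volume --volume-id vol-xxxx
aws ec2 attach-volume --volume-id vol-xxxx --instance-id i-rescue --device xvdf
# then RDP into the rescue instance, mount the volume, and uninstall the firewall offline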

Apache Whirr on EC2 with custom AMI

I am trying to launch a cluster of custom AMI images. The AMI is just the Ubuntu 12.04 server image from Amazon's free tier selection with Java installed (I actually want to create an AMI with numpy and scipy). In fact, I created that image by launching the Ubuntu 12.04 instance with whirr and noop as the role. Then I installed Java, and in the AWS online console selected Create Image (EBS AMI). I am using the same whirr recipe script I used to launch the original Ubuntu server, with only the image-id changed.
Whirr launches the image and it shows up in the console. Then it tries to run the InitScript for noop and nothing happens. After 10 min it throws an exception caused by the script running for too long. whirr.log contains the record
error acquiring SFTPClient() (out of retries - max 7): Invalid packet: indicated length 1349281121 too large
I saw this error mentioned in one of the tutorials; the suggested solution was to add the line
whirr.bootstrap-user=ec2-user
to let jclouds know the username. I know this is the correct username, and it was used by default anyway. After adding the line, whirr.log shows an authentication error, a problem with the public key.
Finally, when I use 'ubuntu' as the user, the error is
Dying because - java.net.SocketTimeoutException: Read timed out
Here's the file I use to launch the cluster:
whirr.cluster-name=pineapple
whirr.instance-templates=1 noop
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
whirr.public-key-file=${sys:user.home}/.ssh/id_rsa.pub
whirr.env.repo=cdh4
whirr.hardware-id=t1.micro
whirr.image-id=us-east-1/ami-224cda4b
whirr.image-location=us-east-1b
Posting the exception log will help us to solve your problem.
Also, setting the following may solve your issue:
whirr.cluster-user=<Clu>
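For an Ubuntu-based custom AMI, the pair of user properties usually looks like the sketch below; both values are assumptions about your image:
whirr.bootstrap-user=ubuntu   # the login user baked into the stock Ubuntu AMI
whirr.cluster-user=whirr      # the user Whirr creates on each node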

AWS EC2: Instance from my own Windows AMI is not reachable

I am a Windows user and wanted to use a spot instance based on my own EBS Windows AMI. For this I have followed these steps:
I had my own on-demand instance with specific settings
Using the AWS console I used the "Create Image EBS" option to create an EBS-based Windows AMI. It worked and the AMI was created successfully
Then, using this new AMI, I launched a spot medium instance; it was created well and is now running with status checks passed.
After waiting an hour or more I am trying to connect to it using the Windows 7 RDC client, but it is not reachable, with the client tool's standard error that the computer is either not reachable or not powered on.
I have tried everything to achieve this goal and have created/deleted many volumes, instances, and snapshots, but am still unsuccessful. Does anybody have a solution to this problem?
Thanks
Basically what's happening is that the existing administrator password (and other user authentication information) for Windows is only valid in the original instance, and can't be used on the new "hardware" that you're launching the AMI on (even though it's all virtualized).
This is why RDP connections will fail to newly launched instances, as will any attempts to retrieve the administrator password. Unfortunately you have no choice but to shut down the new instances you've been trying to connect to because you won't be able to do anything with them.
For various reasons the Windows administrator password cannot be preserved when running a snapshot of the operating system on different hardware (even virtualized hardware) - this is a big part of the reason why technologies like Active Directory exist, so that user authentication information is portable between machines and networks.
It sounds like you did all the steps necessary to get this working except for one - you never took any steps to cause a new password to be assigned to your newly-launched instances based on the original AMI you created.
To fix this, BEFORE turning your instance into a custom AMI that can be used to launch new instances, you need to (in the original instance) run the Ec2ConfigService Settings tool (found in the start menu when remoted into the original instance using RDP), and enable the option to generate a new password on next reboot. Save this setting change.
Then when you do create an AMI from the original instance, and use that AMI to launch new instances, they will each boot up to your custom Windows image but will choose their own random administrator password.
At this point you can go to your ec2 dashboard and retrieve the newly-generated password for the new instance based on the old AMI, and you'll also be able to download the RDP file used to connect to it.
One additional note is that Amazon warns it can take upwards of 30 minutes to retrieve an administrator password after launching a new instance; however, in my experience I've never had to wait more than a few minutes.
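The same retrieval works from the CLI; a sketch, with the instance id and key path as placeholders:
# decrypts the auto-generated administrator password with the launch key pair
aws ec2 get-password-data --instance-id i-xxx --priv-launch-key /path/to/keypair.pem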
Your problem is most likely that the security group you used to launch the AMI does not have RDP (TCP port #3389) enabled.
When you launch the Windows AMI for the first time, AWS will populate the quicklaunch with this port enabled. However, when you launch the subsequent AMI, you will have to ensure that this port is open in your security group.
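A sketch of opening RDP on the group from the CLI; the group id and CIDR are placeholders (prefer your own IP as a /32 over 0.0.0.0/0):
aws ec2 authorize-security-group-ingress --group-id sg-xxxx --protocol tcp --port 3389 --cidr 203.0.113.7/32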
