"There are no OCR keys; creating a new key encrypted with given password" Crashes when running Chainlink node - chainlink

I am setting up a Chainlink node on AWS EC2 + AWS RDS (PostgreSQL) and have followed every step in the documentation (https://docs.chain.link/docs/running-a-chainlink-node/).
Everything runs smoothly until the OCR key creation step. Once it gets there, it shows "There are no OCR keys; creating a new key encrypted with given password". This is supposed to happen, but the Docker container exits right after.
(Screenshot: output after OCR key creation)
I have tried the following:
Checking whether there is a problem with the specific table these keys are stored in within the PostgreSQL database, public.encrypted_ocr_key_bundles, which gets populated if this step succeeds (roughly the check sketched after this list). Nothing here so far.
Using a different version of the Chainlink Docker image (see the Chainlink Docker Hub). I am currently using version 0.10.0. No success either, even with the latest ones.
Using AWS CloudFormation to "let AWS + Chainlink" take care of this, but I ran into similar problems there as well, so no success.
I have thought about populating the OCR table manually with a query, but I am far from having the proper OCR key generation knowledge/script in hand, so I do not like this option.
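For reference, this is roughly the check I ran against that table (the connection details are placeholders for my RDS instance; only the table name comes from the Chainlink schema):

import psycopg2

conn = psycopg2.connect(
    host="my-rds-endpoint.rds.amazonaws.com",  # placeholder
    dbname="chainlink",                        # placeholder
    user="chainlink",                          # placeholder
    password="...",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM public.encrypted_ocr_key_bundles;")
    print(cur.fetchone()[0])  # stays at 0, since the node exits before writing the key bundle
conn.close()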
Does anybody know what else to try/where the problem could be?
Thanks a lot in advance!

UPDATE: It was a simple memory problem. The AWS micro instance (1 GB RAM) was running out of memory when the OCR keys were generated. I only got a log of the error after switching to an updated version of the Chainlink Docker image. In conclusion: migrate to a bigger instance. Should've thought of that, but learning never stops!

Related

Using ACCEPTINVCHARS with a remote host

I am using a scraper and uploading my data into Redshift from EC2. I would prefer not to upload the data into S3 first. My code is in a Jupyter Notebook. However, I get the "String contains invalid or unsupported UTF8 codepoints. Bad UTF8 hex sequence: 80 (error 3)" error that I see a lot of other people have asked about previously. I even found a page on Redshift that walks through using a Remote Desktop. However, as I said before, I would prefer not to go through S3. Is this possible?
Currently using psycopg2 to connect to the database. I figured it wouldn't work but I tried just putting in acceptinvchars after the database user/password line, and it said that ACCEPTINVCHARS isn't defined.
If you want to copy data to Redshift right from your notebook, you have to compose valid INSERT statements and execute them against an existing table in Redshift (there is a sketch of this below). However, the throughput of this approach is quite low. I don't know how much data you plan to write, but I guess scrapers should have higher throughput than that. You can instead first write the output of your Python script to the same EC2 instance and then use the COPY command.
More info on copying from an EC2 instance here: COPY from Remote Host (SSH)
As for your error, you likely have accented letters in your input, so you need to use LATIN1 encoding everywhere.
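If you do go the direct-insert route from the notebook, a minimal sketch with psycopg2 looks something like this (the cluster endpoint, credentials, table and column names are all made up; execute_values batches rows into multi-row INSERT statements, which helps throughput a little):

import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect(
    host="my-cluster.xxxxxx.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439, dbname="dev", user="awsuser", password="...",    # placeholder credentials
)
rows = [("item-1", 9.99), ("item-2", 4.50)]  # whatever your scraper produced
with conn, conn.cursor() as cur:
    execute_values(cur,
                   "INSERT INTO scraped_items (name, price) VALUES %s",  # hypothetical table
                   rows)
conn.close()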

Rubber stalling while executing `bundle:install'

A Rubber deployment as per the quick start instructions, using the latest 3.1.0 version on an m1.small instance, reaches the stage of fetching and installing the gems (the last one loaded is pg). I see no mention of therubyracer in the scroll of gems...
The process successfully completes deploy:setup, rubber:collectd:bootstrap, deploy:setup, deploy:update_code, but upon deploy:finalize_update the callback being triggered is bundle:install.
Invariably, the process stalls at this point. The /etc/hosts file does refer to the proper configuration (52.25.135.252 production.foo.com ec2-52-25-135-252.us-west-2.compute.amazonaws.com ip-172-[...]).
One oddity is that when trying to ssh into the instance
ssh -i aws-eb production.foo.com
or via the ec2-user
ssh -i aws-eb ec2-user@ec2-52-25-135-252.us-west-2.compute.amazonaws.com
access is denied:
Permission denied (publickey).
for a key that I was using with Elastic Beanstalk until a few days ago and had inserted into the config/rubber/rubber.yml file.
I will attempt with a new key pair, but how can a key be now deemed public and unusable?
UPDATE: Setting up a new key pair does not alter the behaviour. The process is stuck at the same point, and I cannot ssh into the instance. production.foo.com does properly return what is configured up to this point: the nginx-on-Ubuntu welcome page.
As far as I can tell, having iterated over this about 10 times, instance memory is the issue.
The smallest instance that has not choked at this point is image_type: m3.medium. AMIs per instance type can be found here.
The automatic suggestion of an m1.small during vulcanization of the application is optimistic, in my opinion.

Apache Whirr on EC2 with custom AMI

I am trying to launch a cluster of custom AMI images. The AMI image is just the Ubuntu 12.04 server image from Amazon's free tier selection with Java installed (I actually want to create an AMI with numpy and scipy). In fact, I created that image by launching the Ubuntu 12.04 instance with Whirr and noop as the role. Then I installed Java, and in the AWS online console selected Create Image (EBS AMI). I am using the same Whirr recipe script I used to launch the original Ubuntu server, with only the image-id changed.
Whirr launches the image and it shows up in the console. Then it tries to run the InitScript for noop and nothing happens. After 10 minutes it throws an exception caused by the script running for too long. whirr.log contains the record
error acquiring SFTPClient() (out of retries - max 7): Invalid packet: indicated length 1349281121 too large
I saw this error mentioned in one of the tutorials; the suggested solution was to add the line
whirr.bootstrap-user=ec2-user
to let jclouds know the username. I know this is the correct username, and it was used by default anyway. After adding the line, whirr.log shows an authentication error (a problem with the public key).
Finally, when I use 'ubuntu' as the user, the error is
Dying because - java.net.SocketTimeoutException: Read timed out
Here's the file I use to launch the cluster:
whirr.cluster-name=pineapple
whirr.instance-templates=1 noop
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
whirr.public-key-file=${sys:user.home}/.ssh/id_rsa.pub
whirr.env.repo=cdh4
whirr.hardware-id=t1.micro
whirr.image-id=us-east-1/ami-224cda4b
whirr.image-location=us-east-1b
Posting the exception log would help in solving your problem.
Also, setting the following may solve your issue:
whirr.cluster-user=<Clu>

nothing happens on libcloud create_node

I use libcloud to create a new node instance on Amazon EC2.
conn.create_node returns me a valid node and printing node.__dict__ shows the expected values.
However, when I check my EC2 dashboard the new machine does not appear there.
Do I need my Python app to stay open so that the node is actually created?
Found it: the instance was actually created, but for some reason the AWS console did not show it even after a refresh. Logging out and back in solved it.
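For anyone hitting the same thing: the node can also be verified through the API itself instead of the console. A minimal libcloud sketch, with placeholder credentials, instance type and AMI id:

from libcloud.compute.base import NodeImage
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

cls = get_driver(Provider.EC2)
conn = cls("ACCESS_KEY_ID", "SECRET_KEY", region="us-east-1")  # placeholders

size = [s for s in conn.list_sizes() if s.id == "t1.micro"][0]  # placeholder instance type
image = NodeImage(id="ami-xxxxxxxx", name=None, driver=conn)    # placeholder AMI id
node = conn.create_node(name="test-node", size=size, image=image)

# create_node returns as soon as the request is accepted; the node may still be pending.
print(node.id, node.state)
print([n.id for n in conn.list_nodes()])  # the new node shows up here even while the console lags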

AWS console not showing all instances during volume attach

I do the following using AWS web console:
1. Attach EBS volume-A to instance-A. Make some changes to the data on volume-A and detach it.
2. Launch a new instance-B (in the same zone as instance-A).
3. Try to attach volume-A to the new instance-B. But the new instance does not appear in the instance list during the attach-volume process (dialog box).
If I try the same attach using command line EC2 API (volume-A and instance-B), it works fine!
Do you know if this is a bug in the AWS web console, or am I doing something wrong in the console? I tried a page refresh in Step #3, but it still would not list the new instance.
In order to attach, the volume and the instance have to be in the same availability zone. So if you are going to attach a volume to an instance, check the instance's zone against the volume's zone. If they do not match, create a new instance in the same zone as the volume you need to attach.
The volume and the instance have to be in the same region AND the same zone.
If you have a volume in us-east-1a and the instance in us-east-1b, you would need to move the volume to us-east-1b to make it work.
I also faced this problem yesterday and the day before. It looks like a problem with Amazon's console cache; not sure why.
To get things showing correctly again, I had to sign out and sign back in. In any case, it's often better to work with the CLI; it behaves more reliably.
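As an alternative to the console, the same attach can be done programmatically. Here is a rough sketch with boto3 (the Python SDK, rather than the EC2 CLI mentioned above); the region, volume ID, instance ID and device name are placeholders, and it checks the availability zones first:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

volume_id = "vol-12341234"  # placeholder IDs
instance_id = "i-xxxxxxx"

# Confirm the volume and instance share an availability zone before attaching.
vol = ec2.describe_volumes(VolumeIds=[volume_id])["Volumes"][0]
inst = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"][0]["Instances"][0]
assert vol["AvailabilityZone"] == inst["Placement"]["AvailabilityZone"], "zones differ"

# Attach; describe_volumes will report the attachment even if the console lags behind.
ec2.attach_volume(VolumeId=volume_id, InstanceId=instance_id, Device="/dev/sdf")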
Although the user interface may not list the instance ID, you can attempt to add the volume anyway. If it's genuinely impossible (rather than a cache issue) you will get an error message.
Paste in the instance ID (i-xxxxxxx) manually, then type the device name (e.g. /dev/sdf) and click Attach.
For the benefit of others: some instance types do not support encrypted volumes, which may be why the instance doesn't appear in the list. I get the following error:
Error attaching volume: 'vol-12341234' is encrypted and 't2.medium' does not support encrypted volumes.
