AWS Chainlink Quickstart Error - S3 Error: Access Denied

I am getting the following error when executing the Chainlink Quickstart, during the AuroraStack creation:
S3 Error: Access Denied
It's not a very friendly error, so I went over to the YAML file:
https://aws-quickstart.s3.us-east-1.amazonaws.com/quickstart-chainlinklabs-chainlink-node/submodules/quickstart-amazon-aurora-postgresql/templates/aurora_postgres.template.yaml
I gave it a read, but I really don't see anything in it that even touches an S3 storage resource.
The error leads me to believe that the parent template, the one that calls the template above, can't even reach the Aurora file in S3.
Anyone else seen / resolved this issue?
Any ideas are appreciated.
Thanks!
Chris
Ultimately, it was an S3 access issue with the Aurora file inside the Chainlink quickstart. I set up my own S3 bucket, re-uploaded the files, gave myself permission, and it worked fine.
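For anyone else debugging this, a quick first check is whether the nested Aurora template is readable at all before CloudFormation tries to fetch it. A minimal sketch, assuming the AWS credentials on your machine are the ones the stack runs under; the bucket and key come straight from the TemplateURL quoted above:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="us-east-1")
key = ("quickstart-chainlinklabs-chainlink-node/submodules/"
       "quickstart-amazon-aurora-postgresql/templates/aurora_postgres.template.yaml")

try:
    # CloudFormation needs to read this object to create the AuroraStack.
    s3.head_object(Bucket="aws-quickstart", Key=key)
    print("Template object is readable with these credentials.")
except ClientError as err:
    # A missing or unreadable key surfaces as AccessDenied/403 when you lack
    # ListBucket on the bucket, which matches the error the stack reports.
    print("S3 returned:", err.response["Error"]["Code"])

If that call fails, re-hosting the quickstart files in a bucket you control, as described above, is the practical workaround.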

Related

Lambda Management getting timed out

I'm trying to build an Alexa skill in Python, and for that I compressed all my files into a zip. But when I try to upload the zip file to Lambda I get this error:
timeout of 61000ms exceeded
I'm not even able to save my files in the Lambda Management Console.
I tried deleting the function and recreating it, but that didn't help. I even tried making another account for the same purpose, but I'm still not getting anywhere.
Is there a bug in AWS Lambda management for the Python runtime, or is it my code? (But I'm not even able to execute my code.)
I faced a similar issue while doing this.
I had to increase the timeout to rectify it.
However, I would suggest that you store the zip file in S3 and then provide the S3 URL in the Lambda Management Console.
Either increase the function's timeout in its configuration, or upload the zip to S3 first and give Lambda the S3 URL.
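A minimal sketch of doing both steps with boto3, assuming your AWS credentials are configured; the bucket and function names here are placeholders:

import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# Upload the deployment package to S3 instead of pushing it through the console.
s3.upload_file("skill.zip", "my-deployment-bucket", "alexa/skill.zip")

# Point the function at the uploaded archive.
lambda_client.update_function_code(
    FunctionName="my-alexa-skill",
    S3Bucket="my-deployment-bucket",
    S3Key="alexa/skill.zip",
)

# Raise the function timeout, in seconds. (If the code update above is still
# in progress, Lambda may ask you to retry this call.)
lambda_client.update_function_configuration(
    FunctionName="my-alexa-skill",
    Timeout=60,
)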

Trying to use Amazon S3 with Ghost running on Heroku to store all the images instead of storing them locally

I've been trying to set up the Ghost storage adapter S3 on my Ghost 1.7 installation, but I must be missing something along the way. I created a bucket with a policy allowing access to a previously created IAM user that has AmazonS3FullAccess permissions. So far so good. I added the lines to config.production.json with the access key and secret key from IAM as the readme says, but it's not working properly. I attach a report screen from the Heroku logs.
Well, I couldn't find a way to fix it on version 1.7, but after updating Ghost to 1.21.1 it works fine.
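For anyone who hits the same logs, one thing worth ruling out before blaming the adapter is whether the IAM keys from config.production.json can write to the bucket at all, independent of Ghost. A minimal sketch; the key pair, region, and bucket name are placeholders:

import boto3

# Use the same access key / secret key you put into config.production.json.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",          # placeholder
    aws_secret_access_key="...",          # placeholder
    region_name="eu-west-1",              # whichever region the bucket is in
)

# If this succeeds, the credentials and bucket policy are fine and the problem
# is on the Ghost/adapter side, as it turned out to be here.
s3.put_object(Bucket="my-ghost-images", Key="upload-check.txt", Body=b"ok")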

Where is Deploybot pushing my repo to on AWS EC2?

This is my setup:
Bitbucket Repo of HTML docs.
Elastic Beanstalk Environment
EC2 c3 Instance (8GB Elastic Block Store attached)
So I connect Deploybot to Elastic Beanstalk successfully and deploy. Path is default.
Success, so it seems.
output Creating application archive for Elastic Beanstalk.
output Application archive created.
output Uploading application archive to S3 bucket deploybot-elastic-beanstalk-mysite-997c5d66.
output Application archive uploaded.
output Creating application version mysite-fbba70de-5e4736.
output Application version created.
output Updating environment e-txarhpt4wp with new version.
output Environment was updated and will now be refreshing.
But no... where are the files?
I drop in with FileZilla (SFTP) and cannot find them anywhere on the server.
Moreover, my path is actually:
var/www/vhosts/my.site.yay/html/
If I change the path in the Deploybot environment settings, the repo never successfully deploys; all I get is 'bypassed' with every single git push, which indicates to me that Deploybot is not actually connecting to anything and so constantly sees 'no changes'.
Anyone got any clues?
I have spent several hours searching prior to this post and there is almost nothing written about using Deploybot with AWS besides the official Deploybot documents.
Thanks in advance to those with potential answers.
Doh!
I had set up my EC2 instance to use Nginx instead of Apache, deleting Apache (httpd).
In the process of writing this question I looked more closely at the Deploybot log and traced that Deploybot pushes a .zip to an S3 bucket which it creates, and then triggers an Elastic Beanstalk environment refresh using whatever built-in hook there must be.
So that deployment hook looks for Apache (httpd) and the whole thing fails, as revealed in the Beanstalk environment event logs:
ERROR
[Instance: i-0dc350d4f82711400] Command failed on instance. Return code: 1 Output: (TRUNCATED)...tory # rb_sysopen - /etc/httpd/conf/httpd.conf (Errno::ENOENT) from /opt/elasticbeanstalk/support/php_apache_env:81:in update_apache_env_vars' from /opt/elasticbeanstalk/support/php_apache_env:125:in' Using configuration value for DocumentRoot:. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/05_configure_php.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
So I switched Deploybot to SFTP with a public key and it works. (I'm not clever enough to edit or write my own Beanstalk environment-refresh hook yet.)
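To make the mechanics above concrete, the Deploybot output maps onto the standard Elastic Beanstalk deployment calls. Roughly, something like this happens on their side; the bucket, version label, and environment ID are the ones from the log, the application name is illustrative:

import boto3

s3 = boto3.client("s3")
eb = boto3.client("elasticbeanstalk")

# 1. The application archive goes into an S3 bucket Deploybot created.
s3.upload_file("archive.zip", "deploybot-elastic-beanstalk-mysite-997c5d66", "archive.zip")

# 2. An application version is registered from that archive...
eb.create_application_version(
    ApplicationName="mysite",
    VersionLabel="mysite-fbba70de-5e4736",
    SourceBundle={
        "S3Bucket": "deploybot-elastic-beanstalk-mysite-997c5d66",
        "S3Key": "archive.zip",
    },
)

# 3. ...and the environment is updated to that version, which is when the
# platform's appdeploy hooks (the ones expecting Apache) run and fail.
eb.update_environment(EnvironmentId="e-txarhpt4wp", VersionLabel="mysite-fbba70de-5e4736")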

How can I upload image files to an AWS EC2 instance?

My web application server is an AWS EC2 instance, and I'm using the MEAN stack.
I'd like to upload images to the EC2 instance (e.g. /usr/local/web/images).
I can't find out how to get the credentials for that; everything I find is about AWS S3.
How can I upload an image file to the EC2 instance?
If you do file transfers repeatedly, try unison. It is bidirectional, a kind of sync, and has options to handle conflicts.
I've found the easiest way to do this as a one-off is to upload the file to Google Drive and then download it from there. View this thread to see how simply this can be done!
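Neither answer mentions it, but there are no AWS-issued upload credentials for an EC2 box; the usual route is plain SCP/SFTP with the instance's key pair. A minimal sketch using paramiko, not something from the answers above; the hostname, username, and key path are placeholders:

import paramiko

# Connect with the EC2 key pair (.pem), not AWS access keys.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "ec2-xx-xx-xx-xx.compute.amazonaws.com",  # instance public DNS (placeholder)
    username="ubuntu",                        # or ec2-user, depending on the AMI
    key_filename="my-keypair.pem",
)

# Copy the image into the directory the web app serves from.
sftp = client.open_sftp()
sftp.put("photo.jpg", "/usr/local/web/images/photo.jpg")
sftp.close()
client.close()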

Serving files from S3

I want to take advantage of caching files on S3, so I decided to put my SWF files in an Amazon bucket, but I get a forbidden error in the Chrome log.
How can I have an SWF file hosted on Heroku load an SWF file hosted in an Amazon bucket?
If I understand things correctly, you might have an issue with cross-domain loading (crossdomain.xml). Please see this link for an elaborate answer on how to solve it: Writing Flash crossdomain.xml for Amazon S3
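As a rough sketch of what that linked answer boils down to: put a crossdomain.xml at the root of the bucket and make the SWF objects publicly readable. The bucket and object names below are placeholders, and the wide-open policy is only for illustration; restrict the allowed domain for production:

import boto3

CROSSDOMAIN = """<?xml version="1.0"?>
<cross-domain-policy>
  <allow-access-from domain="*" />
</cross-domain-policy>
"""

s3 = boto3.client("s3")

# Flash looks for this file at the root of the host serving the loaded SWF.
s3.put_object(
    Bucket="my-swf-bucket",
    Key="crossdomain.xml",
    Body=CROSSDOMAIN.encode(),
    ContentType="text/xml",
    ACL="public-read",
)

# The 403 in Chrome usually means the SWF object itself is not public either.
s3.put_object_acl(Bucket="my-swf-bucket", Key="player.swf", ACL="public-read")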
