Trying to use Amazon S3 with Ghost running on Heroku to store all the images instead of storing them locally

I've been trying to set up the Ghost storage adapter S3 on my Ghost 1.7 installation, but I must be missing something along the way. I created a bucket with policies that allow access to an IAM user I had previously created with AmazonS3FullAccess permissions; so far so good. I added the lines to config.production.json with the access key and secret key from IAM as the readme says, but it's not working properly. I've attached a screenshot of the Heroku logs.
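For reference, the storage block I added to config.production.json follows the adapter's readme and looks roughly like this (keys redacted; region and bucket are placeholders):

"storage": {
  "active": "s3",
  "s3": {
    "accessKeyId": "YOUR_ACCESS_KEY_ID",
    "secretAccessKey": "YOUR_SECRET_ACCESS_KEY",
    "region": "YOUR_REGION",
    "bucket": "YOUR_BUCKET_NAME"
  }
}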

Well, I couldn't find a way to fix it on version 1.7, but after updating Ghost to 1.21.1 it works correctly.

Related

Parse Server: Transferring files hosted on Heroku to AWS S3 bucket

I have parse-server running on Heroku. When I first created this app, I didn't specify a files adapter in index.js, so all uploaded files have been getting stored on Heroku.
So I have now run out of room, and I have set up an AWS S3 bucket to store my files. This is working fine except for the fact that any files which were originally stored on Heroku can no longer be accessed through the application.
At the moment I am thinking of looping through all objects which have a relation to a file stored on Heroku, then uploading each file to the S3 bucket. I'm just hoping that there may be some tool out there, or that someone has an easier process for doing this.
Thanks
There are migration guides for migrating parse-server itself, but unfortunately I don't see anything in the documentation about migrating hosted files.
I did find one migration tool, but it appears to still use the previous file adapter (on your Heroku instance) for existing files while storing anything new on the new adapter (S3 storage):
parse-server-migrating-adapter
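If you do end up scripting the loop you describe yourself, a rough sketch with the Parse JS SDK might look like the following; note that the "Photo" class and "image" column are hypothetical placeholders for your own schema:

// Hypothetical migration sketch: re-save Heroku-hosted files so they go
// through the newly configured S3 adapter. Requires Node 18+ (global fetch).
import Parse from 'parse/node';

Parse.initialize('YOUR_APP_ID', undefined, 'YOUR_MASTER_KEY');
Parse.serverURL = 'https://your-app.herokuapp.com/parse';

async function migrateFiles(): Promise<void> {
  const query = new Parse.Query('Photo');
  query.exists('image');
  const objects = await query.findAll({ useMasterKey: true });
  for (const obj of objects) {
    const oldFile = obj.get('image');
    // Skip files that no longer live on Heroku.
    if (!oldFile.url().includes('herokuapp.com')) continue;
    // Download the old file's bytes, then save a new Parse.File; the save
    // goes through whatever files adapter the server is configured with now.
    const response = await fetch(oldFile.url());
    const bytes = Array.from(new Uint8Array(await response.arrayBuffer()));
    const newFile = new Parse.File(oldFile.name(), bytes);
    await newFile.save({ useMasterKey: true });
    obj.set('image', newFile);
    await obj.save(null, { useMasterKey: true });
  }
}

migrateFiles().catch(console.error);

You would repeat this for every class with a file column, and only delete the Heroku copies once you have verified the new URLs resolve.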

Where is Deploybot pushing my repo to on AWS EC2?

This is my setup:
Bitbucket Repo of HTML docs.
Elastic Beanstalk Environment
EC2 c3 Instance (8GB Elastic Block Store attached)
So I connect Deploybot to Elastic Beanstalk successfully and deploy. Path is default.
Success, so it seems.
output Creating application archive for Elastic Beanstalk.
output Application archive created.
output Uploading application archive to S3 bucket deploybot-elastic-beanstalk-mysite-997c5d66.
output Application archive uploaded.
output Creating application version mysite-fbba70de-5e4736.
output Application version created.
output Updating environment e-txarhpt4wp with new version.
output Environment was updated and will now be refreshing.
But no... where are the files?
I drop in with FileZilla (SFTP) and cannot find them anywhere on the server.
Moreover, my path is actually:
var/www/vhosts/my.site.yay/html/
If I change the path in the Deploybot environment settings, the repo never successfully deploys; instead all I get is 'bypassed' with every single git push, which indicates to me that Deploybot is not actually connecting to anything and thus constantly sees 'no changes'.
Anyone got any clues?
I have spent several hours searching prior to this post and there is almost nothing written about using Deploybot with AWS besides the official Deploybot documentation.
Thanks in advance to those with potential answers.
Doh!
I had set up my EC2 instance to use Nginx instead of Apache, deleting Apache (httpd) in the process.
In the process of writing this question I looked more closely at the Deploybot log and traced what happens: Deploybot pushes a .zip to an S3 bucket which it creates, and then triggers an Elastic Beanstalk environment refresh using whatever built-in webhook there must be.
Whatever webhook that uses looks for Apache (httpd), so the whole thing fails, as revealed in the Beanstalk environment event logs:
ERROR
[Instance: i-0dc350d4f82711400] Command failed on instance. Return code: 1 Output: (TRUNCATED)...tory # rb_sysopen - /etc/httpd/conf/httpd.conf (Errno::ENOENT) from /opt/elasticbeanstalk/support/php_apache_env:81:in update_apache_env_vars' from /opt/elasticbeanstalk/support/php_apache_env:125:in' Using configuration value for DocumentRoot:. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/05_configure_php.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
So I switched my Deploybot to SFTP with a public key and it works. (I'm not clever enough to edit/write my own Beanstalk environment refresh webhook yet.)
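For anyone tracing the same flow: the Deploybot log above boils down to three AWS calls, which you could reproduce yourself. A rough sketch with the AWS SDK for JavaScript v3 (bucket, application, environment, and version names are placeholders):

import { readFileSync } from 'fs';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import {
  ElasticBeanstalkClient,
  CreateApplicationVersionCommand,
  UpdateEnvironmentCommand,
} from '@aws-sdk/client-elastic-beanstalk';

const s3 = new S3Client({});
const eb = new ElasticBeanstalkClient({});

async function deploy(): Promise<void> {
  // 1. Upload the application archive to S3 ("Uploading application archive").
  await s3.send(new PutObjectCommand({
    Bucket: 'my-deploy-bucket',
    Key: 'mysite-v1.zip',
    Body: readFileSync('archive.zip'),
  }));

  // 2. Register the archive as a new application version.
  await eb.send(new CreateApplicationVersionCommand({
    ApplicationName: 'mysite',
    VersionLabel: 'v1',
    SourceBundle: { S3Bucket: 'my-deploy-bucket', S3Key: 'mysite-v1.zip' },
  }));

  // 3. Point the environment at the new version. Beanstalk then redeploys and
  // runs its platform hooks, which is the step that failed for me without Apache.
  await eb.send(new UpdateEnvironmentCommand({
    EnvironmentName: 'my-env',
    VersionLabel: 'v1',
  }));
}

deploy().catch(console.error);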

Laravel filesystem disk update from UI

I am trying to use the https://github.com/spatie/laravel-backup package to manage backups for my app. I have successfully integrated it and can make a backup to the local or S3 disk.
I would like to add the ability for the admin to change the S3 credentials (key/secret) from the admin panel. I am confused about how to get that done; please guide me on how I can change or modify these credentials from the admin panel. I am a newbie. What I would like is a UI to connect an S3 bucket to the app and be able to update it and link a new S3 account.
The app will use this to store its backups.

AWS with Eclipse cannot find key pair

I just installed the AWS Eclipse Toolkit and am having problems with its key pairs. When I go to Eclipse > Window > Preferences > AWS Toolkit > Key Pairs, I find no icons or names for my keypair .pem file. I first downloaded all of the Eclipse AWS software modules with no issue (I did not include the Android module, as instructed). I downloaded the credentials and key pair files from my EC2 instance. I placed the credentials file in the .aws directory, then filled out the AWS Toolkit credentials window, pasting my access key ID and secret access key into the default profile details. I think Eclipse is seeing it, because after I rebooted Eclipse it created another credentials file with the ID and secret key reformatted. I placed my keypair.pem in the .ec2 directory. Like I said earlier, when I go to the Preferences > Key Pairs window there is nothing in the name field, and I cannot associate my private keys with my Amazon EC2 key pairs. Any help would be welcome. Best regards.
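For reference, the credentials file I placed in the .aws directory uses the standard AWS shared-credentials format (values redacted):

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY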
Ok, I finally figured it out.
I am new to AWS and did not understand that I had to first set up an IAM user and group and enable the necessary policies. I found a pretty good video that goes over user/group settings.
Once I set up the policies, I noticed that Eclipse had immediate access to the AWS cloud resources. When I opened Eclipse Window > Preferences > Key Pairs, my AWS key pairs were displayed. I clicked the one I had set up for the account and everything worked fine.
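In case it helps anyone else: what mattered was attaching a policy that lets the toolkit enumerate EC2 resources (key pairs included). A minimal sketch of such a policy, assuming read-only EC2 access is enough for browsing (the broader managed policies such as AmazonEC2FullAccess also work):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:Describe*"],
      "Resource": "*"
    }
  ]
}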

How to actually configure debugging in CFBuilder

I have ColdFusion Builder 2.0.0 installed and I am trying to look at the much vaunted step debugging. However, I cannot seem to get it to work, as I don't have my site / JRun install set up in the naive way the examples show.
I am using version 9,0,1,274733 of ColdFusion and my configuration is as follows:-
Installed as multi-server version with JRun here:- c:\Apps\JRun4
application files are here:- d:\websites\my.website.com
web root is here d:\websites\my.website.com\www
core library of CFCs is here d:\websites\frameworks\core which is mapped in CF as core
I have read this: http://help.adobe.com/en_US/ColdFusionBuilder/Using/WS0ef8c004658c1089-31c11ef1121cdfd6aa0-7fff.html, read this: http://forta.com/blog/index.cfm/2007/5/30/CF8-Debugger-Getting-Started, and watched this: https://experts.adobeconnect.com/_a204547676/p33029638/?launcher=false&fcsContent=true&pbMode=normal, but I get stuck at the point after you have configured RDS and you are setting up the server for your project.
Now, I am pretty sure the above is correct; when I move to the next page in the wizard I get the following:-
As I understand it, my Server Home should be c:\Apps\JRun4 and my Document Root should be d:\websites\my.website.com
This all looks like it is going to be fine until you actually try to debug, at which point I get
followed by
I can confirm that the server is running and RDS is enabled, as I can see all my databases in the RDS Dataview.
Any help would be gratefully received as this is very frustrating and the documentation is very lacking.
There is a video tutorial as well that you may want to check and see if that helps. http://blogs.adobe.com/anand/2011/01/learn-how-to-debug-coldfusion-applications-using-coldfusion-builder-2.html
You need to specify the RDS username/password and the "application server name". If you are using the base instance that was installed when you set up the multiserver install of CF, that is "cfusion"; otherwise it's the name of the instance you are using.
The RDS username is most likely "admin" unless you set up custom users for RDS. The password is the RDS password you specified when you installed CF.
