I just started using AWS Elastic Beanstalk to host a web app I wanted to make. However, after following the instructions twice, start to finish, I get the same end result. The status shows everything is fine, but I keep getting this message:
The status is fine:
And I can view my app on localhost; it just doesn't seem to work on Beanstalk...
When I first ran eb init, these are the settings I chose:
1) US East (Virginia)
2) 64bit Amazon Linux running Ruby 1.9.3
3) No DB instance for now.
Has anyone experienced this problem? What could possibly be causing my app to not work on Beanstalk?
After waiting a couple of hours it finally loaded my index page. I guess it just takes a while for my pushed changes to show up.
I have a Meteor app running on Heroku, and until last week the database was running on mLab.
Then I switched to MongoDB Atlas, and after a few days the application was running very slowly.
I upgraded from M2 to M5, so it was OK for a while, but now it is very slow again.
It seems there is a network-out limitation, but with mLab there wasn't one.
Could it be a problem with the queries, or what am I doing wrong? What do I have to consider?
Does anybody know about this issue or have experience with the Meteor/Heroku/MongoDB Atlas combination?
Thanks in advance
In Heroku, when you picked up the mLab service, the DB was most probably provisioned in the same VPC as your Meteor instance. I'd first make sure I run the Atlas MongoDB in the same region and with the same service provider as Heroku (e.g. both Meteor and Mongo run on AWS eu-central). Did you do this? https://www.mongodb.com/blog/post/integrating-mongodb-atlas-with-heroku-private-spaces
Do you exceed the limitations for your Mongo cluster? https://docs.atlas.mongodb.com/reference/atlas-limits/ This is important to avoid paying for a service scale that you don't need.
Monti APM (https://montiapm.com/) has a free monitoring service for Meteor with 8 hours of retention. That can help you understand your Oplog transactions and volume.
I don't know how you set up your Oplog, but you may also try this (older) Mongo URI format. I still use it with the latest Meteor version and I am fine with it:
"env": {
"MONGO_URL": "mongodb://yourapp:XXXXXXXXXXXXXXXX#yourapp-shard-00-00-zc1lg.mongodb.net:27017,yourapp-shard-00-01-zc1lg.mongodb.net:27017,yourapp-shard-00-02-zc1lg.mongodb.net:27017/meteor?ssl=true&replicaSet=yourapp-shard-0&authSource=admin",
"MONGO_OPLOG_URL": "mongodb://yourapp-oplog:XXXXXXXXXXXXXXXX#yourapp-shard-00-00-zc1lg.mongodb.net:27017,yourapp-shard-00-01-zc1lg.mongodb.net:27017,yourapp-shard-00-02-zc1lg.mongodb.net:27017/local?authSource=admin&ssl=true&replicaSet=Yourapp-shard-0"
}
I just created my first Heroku app and pushed my code to Heroku. While testing it, however, it showed that one of the templates did not exist, when in fact it does when I test it directly on the local server on my laptop. Please do guide me if you have any ideas! (P.S.: I am using Windows, so please take that into consideration when helping out!)
I am having trouble starting the processes/queues for a job server deployed to Google App Engine. In the Horizon dashboard, the instance names are visible, but no processes show and jobs do not execute.
While running the code on localhost, the processes/queues do start and execute jobs. I confirmed that the horizon.php config is correct and matches my APP_ENV, yet still no processes start.
Any guidance is appreciated!
Horizon opens and closes PHP processes with the proc_open and proc_close functions, which are on the list of permanently disabled functions in Google App Engine. After adding them to the whitelist_functions configuration under runtime_config in app.yaml, everything works great.
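For reference, this is roughly what the relevant part of app.yaml might look like on the flexible PHP runtime; only the whitelist_functions line is the actual fix, and the runtime, env and document_root values are just placeholders for a typical Laravel setup:

runtime: php
env: flex

runtime_config:
  # document_root is a placeholder; point it at your app's public directory
  document_root: public
  # proc_open/proc_close are disabled by default; whitelisting them lets Horizon spawn its worker processes
  whitelist_functions: proc_open,proc_close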
This is my setup:
Bitbucket Repo of HTML docs.
Elastic Beanstalk Environment
EC2 c3 Instance (8GB Elastic Block Store attached)
So I connected Deploybot to Elastic Beanstalk successfully and deployed. The path is the default.
Success, so it seems.
output Creating application archive for Elastic Beanstalk.
output Application archive created.
output Uploading application archive to S3 bucket deploybot-elastic-beanstalk-> mysite-997c5d66.
output Application archive uploaded.
output Creating application version mysite-fbba70de-5e4736.
output Application version created.
output Updating environment e-txarhpt4wp with new version.
output Environment was updated and will now be refreshing.
But no... where are the files?
I drop in with FileZilla (SFTP) and cannot find them anywhere on the server.
Moreover, my path is actually:
var/www/vhosts/my.site.yay/html/
If I change the path in the Deploybot environment settings, the repo never successfully deploys; instead, all I get is 'bypassed' with every single git push, which indicates to me that Deploybot is not actually connecting to anything and thus constantly sees 'no changes'.
Anyone got any clues?
I have spent several hours searching prior to this post, and there is almost nothing written about using Deploybot with AWS besides the official Deploybot documentation.
Thanks in advance to those with potential answers.
Doh!
I had set up my EC2 instance to use Nginx instead of Apache, deleting Apache (httpd) in the process.
In the process of writing this question I looked more closely at the Deploybot log and traced that Deploybot pushes a .zip to an S3 bucket which it creates, and then triggers an Elastic Beanstalk environment refresh, using whatever built-in webhook there must be.
So whatever webhook that uses looks for Apache (httpd), and the whole thing fails, as revealed in the Beanstalk environment event logs:
ERROR
[Instance: i-0dc350d4f82711400] Command failed on instance. Return code: 1 Output: (TRUNCATED)...tory # rb_sysopen - /etc/httpd/conf/httpd.conf (Errno::ENOENT) from /opt/elasticbeanstalk/support/php_apache_env:81:in update_apache_env_vars' from /opt/elasticbeanstalk/support/php_apache_env:125:in' Using configuration value for DocumentRoot:. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/05_configure_php.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
So I switched my Deploybot to SFTP with a public key and it works. (I'm not clever enough to edit/write my own Beanstalk environment refresh webhook yet.)
I've written code that allows users to search for specific images through the Google Image Search API and then downloads those images using Carrierwave's remote image functionality. We're getting bug reports, though, that certain URLs are throwing 403 Forbidden errors, and we traced it back to
Kernel.open(url)
Scanning existing issues got me to "'open_http': 403 Forbidden (OpenURI::HTTPError) for the string “Steve_Jobs” but not for any other string", which suggests that the problem is due to the missing User-Agent, and so we added this to our call:
Kernel.open(url, 'User-Agent' => "Ruby/#{RUBY_VERSION}")
This resolved the issue in our dev environment, but it had no effect at all in our production environment. This is the most frustrating part. My production environment (running on AWS EC2, Ubuntu 12.04) fails every time, and for far more URLs than my dev environment (OSX 9.5). Both environments are running Ruby 2.0.0-p353 and Rails 4.0.5.
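For context, the download step now looks roughly like the sketch below; fetch_remote_image is a simplified, hypothetical stand-in for our Carrierwave call, not the exact production code:

require 'open-uri'

# Simplified stand-in for the Carrierwave remote-image download step.
def fetch_remote_image(url)
  # With open-uri loaded, Kernel.open accepts request headers; sending an
  # explicit User-Agent avoids servers that reject Ruby's default agent.
  Kernel.open(url, 'User-Agent' => "Ruby/#{RUBY_VERSION}") { |io| io.read }
rescue OpenURI::HTTPError => e
  # In production this is where the 403 Forbidden errors still surface.
  warn "Remote image fetch failed for #{url}: #{e.message}"
  nil
end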
We've isolated several test URLs with which we can consistently re-create this problem.
Example: http://www.lowes.com/images/LCI/Planning/HowTos/ht_BuildaHomePlayground_kit.jpg
I'm running out of ideas, but it seems to be something specific to the AWS box (since it works in dev), so is it possible that AWS is using some sort of outbound filter/proxy, or that Ubuntu 12.04 has a known issue with OpenURI?
Scouring the internet is starting to exhaust my options.
UPDATE
I have two AWS instances running that were supposed to be identical to each other, but upon closer examination, one is running GNU/Linux 3.2.0-58-virtual and the code above works properly (that's my staging environment), while the other is running GNU/Linux 3.2.0-68-virtual and the code above fails (that's my production environment). So the issue would seem to lie in whatever changed between 58 and 68.
For now, I'm switching my production and staging environments so that the issue is resolved, though it feels like a temporary and invalid fix, since the staging environment is likely to be upgraded at some point, landing us back at square one.