I was wondering if someone could help me. I've recently added the ability to upload images in my Rails application using CarrierWave, Fog, and S3 for storage.
The application is running on Ruby 1.9.3-p194 and Rails 3.2.11. In development the application works fine and I can upload images all day long; in production, however, I'm getting an intermittent "Excon::Errors::SocketError: Broken pipe (Errno::EPIPE)". I say intermittent because I've managed to upload a couple of images successfully in production, but more often than not I get this error.
I've spent some time looking into it but at present I am at a loss as to what is causing this.
So after doing some further digging, it appears that it may have been because the region was incorrectly set in my config. I've run a test and all seems to be working correctly again.
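In case it helps anyone else, the region lives in the fog credentials of the CarrierWave initializer. A minimal sketch of what the corrected config looks like (the bucket name, env var names, and region below are placeholders, not my real values):

    # config/initializers/carrierwave.rb
    CarrierWave.configure do |config|
      config.storage = :fog
      config.fog_credentials = {
        provider:              'AWS',
        aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],      # placeholder env vars
        aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
        region:                'eu-west-1'                    # must match the bucket's actual region
      }
      config.fog_directory = 'my-app-uploads'                 # placeholder bucket name
    end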
Related
I've deployed Strapi on Heroku and have set up the content fine. When I upload images and videos through the CMS interface and save the update, it saves successfully, but the file URL returns a 404. Has anyone experienced this before? Am I missing something?
Thanks guys.
https://strapi.io/documentation/3.0.0-beta.x/guides/deployment.html#file-uploads
File Uploads
Like with project updates on Heroku, the file system doesn't support local uploading of files as they will be wiped when Heroku "Cycles" the dyno. This type of file system is called ephemeral, which means the file system only lasts until the dyno is restarted (with Heroku this happens any time you redeploy or during their regular restart which can happen every few hours or every day).
Due to Heroku's filesystem you will need to use an upload provider such as AWS S3, Cloudinary, or Rackspace. You can view the documentation for installing providers here and you can see a list of providers from both Strapi and the community on npmjs.com.
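For example, wiring up the S3 provider is roughly an install plus plugin config. A sketch, assuming the strapi-provider-upload-aws-s3 package and a 3.0.0-beta.x project (the config file location and keys vary by version, so check the provider's README):

    # install the S3 upload provider (package name as published on npmjs.com)
    npm install strapi-provider-upload-aws-s3 --save
    # then configure the upload plugin with your bucket, region, and AWS keys;
    # for 3.0.0-beta.x the settings file is typically
    # ./extensions/upload/config/settings.json (check the provider's README)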
I am facing a critical issue in my application, which is developed in Laravel and Angular. The issue is that I am getting the old email templates on the live site, while on my local server I get the latest updated ones. The deployment process is automatic: I just commit the code to Bitbucket, and a Bitbucket Pipeline pushes the code directly to the AWS server.
I have already run the cache commands for Laravel and restarted the jobs, but I am still getting the same issue. If anyone has experienced the same issue or knows how to resolve it, please guide me!
I think you can try one of the following ways to overcome the issue. I faced a similar issue and resolved it in these ways (the exact commands are sketched after the list):
Try deleting the cached view files manually from storage/framework/views
Upload the code for the affected module directly to AWS, bypassing the pipeline
Restart your server
This will surely resolve your issue!
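For reference, something like this on the live box (a sketch; the service name depends on your stack, and artisan's view:clear removes the same compiled views that live under storage/framework/views):

    php artisan view:clear         # drops the compiled Blade views in storage/framework/views
    php artisan cache:clear        # flushes the application cache
    php artisan config:clear       # clears the cached config
    sudo service apache2 restart   # or whatever service your web server runs as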
Since you are using a Laravel and Angular application deployed on AWS,
I assume that Bitbucket is pushing the code and build commands are fired on every push.
There are a few things which can help you:
Rebuild the Angular side on every push, since the Angular build hashes all the files in the dist folder
Delete the Laravel cached view files, which are stored in storage/framework/views
Check that your server is pointing to the right project folder
If point 1 or 2 fixes it, you can automate the process by running the corresponding CLI commands after every push, as in the sketch below.
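A sketch of the two commands a pipeline step might run (the build flags and paths are assumptions about your project layout):

    ng build --prod          # point 1: rebuild Angular so dist/ gets freshly hashed files
    php artisan view:clear   # point 2: drop the cached Blade views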
I've written code that allows users to search for specific images through the Google Image Search API and then downloads those images using CarrierWave's remote image functionality. We're getting bug reports, though, that certain URLs are throwing 403 Forbidden errors, and we traced it back to
Kernel.open(url)
Scanning existing issues got me to "'open_http': 403 Forbidden (OpenURI::HTTPError) for the string “Steve_Jobs” but not for any other string", which suggests that the problem is due to a missing User-Agent, so we added this to our call:
Kernel.open(url, 'User-Agent' => "Ruby/#{RUBY_VERSION}")
This resolved the issue in our dev environment, but it had no effect at all in our production environment. This is the most frustrating part. My production environment (running on AWS EC2, Ubuntu 12.04) fails every time, and for far more URLs than my dev environment (OSX 9.5). Both environments are running Ruby 2.0.0-p353 and Rails 4.0.5.
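For anyone wanting to experiment, a fuller variant of the same call, with browser-like headers and the 403 rescued so one blocked URL doesn't abort a whole batch, might look like this (the helper name and UA string are hypothetical; this assumes the remote hosts are filtering on request headers):

    require 'open-uri'

    # hypothetical helper: fetch a remote image with browser-like headers,
    # returning nil instead of raising when the host answers 403/404
    def fetch_remote_image(url)
      open(url,
           'User-Agent' => 'Mozilla/5.0 (compatible; ImageFetcher/1.0)', # placeholder UA
           'Accept'     => '*/*')
    rescue OpenURI::HTTPError => e
      Rails.logger.warn("image fetch failed for #{url}: #{e.message}")
      nil
    end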
We've isolated several test URLs with which we can consistently recreate the problem.
Example: http://www.lowes.com/images/LCI/Planning/HowTos/ht_BuildaHomePlayground_kit.jpg
I'm running out of ideas, but it seems to be something specific to the AWS box (since it works in dev), so is it possible that AWS is using some sort of outbound filter/proxy, or that Ubuntu 12.04 has a known issue with OpenURI?
Scouring the internet is turning up nothing, and I'm running out of options.
UPDATE
I have two AWS instances running that were supposed to be identical to each other, but upon closer examination, one is running GNU/Linux 3.2.0-58-virtual and the code above works properly (that's my staging environment), while the other is running GNU/Linux 3.2.0-68-virtual and the code above fails (that's my production environment). So the issue would seem to lie in whatever changed between 58 and 68.
For now, I'm swapping my production and staging environments so that the issue is resolved, though it feels like a temporary and fragile fix, since the staging environment is likely to be upgraded at some point, landing us back at square one.
I just started using AWS Elastic Beanstalk to host a web app I wanted to make. However, after following the instructions twice from start to finish, I get the same end result: the environment status shows everything is fine, but the app keeps returning the same message in the browser (screenshots omitted).
And I can view my app on localhost; it just doesn't seem to work on Beanstalk...
When I first ran eb init these are the settings I chose:
1) US East (Virginia)
2) 64bit Amazon Linux running Ruby 1.9.3
3) No DB instance for now.
Has anyone experienced this problem? What could possibly be causing my app not to work on Beanstalk?
After waiting a couple of hours it finally loaded my index page. I guess it just takes a while for pushed changes to show up.
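If anyone else hits this, the EB CLI can show what the environment is actually doing while you wait. A sketch (these are the command names in the current awsebcli; the older eb tool's commands differ slightly):

    eb status    # environment health and the currently deployed version
    eb events    # recent deployment events, including failures
    eb logs      # pull logs from the instances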
The site I'm building with Laravel 4 randomly returns a 500 error on the staging server, either with an error page saying that a required file is missing or with a blank page. Once this happens, no page of the site will load; restarting Apache fixes the issue. No file or database changes are made at the time the issue starts or during the Apache restart. Clearing the cache with artisan doesn't help. The staging server has 512 MB of RAM and 20 GB of disk space. This started happening last week and is extremely hard to replicate or watch the error logs for, since it seems to happen randomly every few days.
I would think that this issue should have nothing to do with the database, assets, or disk reads, because only restarting Apache helps.
Are there any known issues with Laravel or any of the vendor packages? Does anyone know a fix for this? All help is appreciated!
Empty your vendor folder, run composer update/install, and restart your server.
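Concretely, something like this from the project root (a sketch; the service name is an assumption about your stack):

    rm -rf vendor                  # empty the vendor folder
    composer install               # reinstall dependencies from composer.lock
    php artisan cache:clear        # clear Laravel's application cache
    sudo service apache2 restart   # restart the web server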