I deployed an application on Heroku. I'm using the free service.
Quite frequently, I get the following error.
PG::Error: ERROR: out of memory
If I refresh the browser, it's OK, but then it happens again at random.
Why does this happen?
Thanks.
Sam Kong
If you experience these errors when running queries, your queries are probably too complicated or inefficient for the plan. The free tier has no cache, so heavy or inefficient queries quickly exhaust the memory that is available.
If you're getting these errors otherwise, open a support ticket at https://help.heroku.com
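If you want to check whether a particular query is the culprit, a rough way to look (assuming the Heroku CLI is installed; the app name and the query are placeholders) is:

    # connect to the database attached to your app
    heroku pg:psql -a your-app-name
    -- inside psql, inspect the plan of a suspect query
    EXPLAIN (ANALYZE, BUFFERS) SELECT ...;

Complicated or inefficient plans will show up here.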
Simply running heroku restart helped me, though.
If you are not on the free tier, it may be because you are using too much memory on connections to Postgres.
Consider an app running on several dynos, with several processes per dyno, each with many threads: you may simply be filling up the connection pool.
Also, as noted in Heroku's Help Center, you may be caching too many prepared statements that will never be reused.
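For example, in a Rails app the number of Postgres connections is roughly dynos times processes per dyno times the pool size per process, so it multiplies quickly. A minimal sketch of keeping it in check, assuming a Rails app configured through config/database.yml (the pool value and the prepared_statements line are only illustrative; whether disabling prepared statements helps depends on your app):

    # config/database.yml, production section -- sketch only
    production:
      adapter: postgresql
      pool: 5                      # connections per process; total = dynos x processes x pool
      prepared_statements: false   # stops the adapter caching statements that are never reused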
I'm a bit confused by a problem that has only become more apparent lately, and I'm hoping someone might be able to point me toward either the appropriate settings to look at, or another problem they have come across before.
I have a Laravel application on a private server that I use for our little museum. As the application has become more complex, the lag is noticeable, and you can see how it almost queues the connections, finishing one request before moving along to the next, whether it is an API call, an AJAX request, a view response, or anything else.
I am running Apache 2.4.29 and my Ubuntu Server is 18.04.1.
I have been looking at connection settings, but not much has helped. If I check my phpinfo() output I see Max Requests Per Child: 0, Keep Alive: on, Max Per Connection: 100, but I believe these are fine the way they are.
If I check my memory, I think it says I have about 65 GB available, with 5 GB used for caching. Watching the live data, memory usage never crosses into GB territory and stays solidly in the MB range. This server is used exclusively for this Laravel project, so I don't have to worry about affecting other projects; I'd just like to make sure this application is getting the best use of it for its purpose.
I'd appreciate any suggestions. I know there's a chance the terms I am searching for are incorrect, or maybe just outdated, so if there are any potentially useful resources out there, I'd appreciate those as well.
Thank you so much!
It's hard to tell since a lot of details are missing, but here are some things that can point you in the right direction:
Install htop via apt-get and watch what happens to your CPU/RAM load with each request to the server (a few starter commands are sketched after this list).
Do you use PHP-FPM to manage the PHP requests? This can help you find out whether the problem lies in your PHP code or in the Apache configuration.
Did you try deploying to a different server? Do you still see the lagging on the other server as well? If not, this indicates a misconfiguration problem and not an issue with your code.
Do you have other processes that are running in the background and might slow things down? Cron? Laravel Queue?
If you install another app on the server (say, phpMyAdmin), is it slow as well, or does it work fine?
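A few starter commands for the checks above (a rough sketch, assuming Ubuntu 18.04 defaults; the php7.2-fpm service name is a guess based on that release):

    sudo apt-get install htop
    htop                                # watch CPU/RAM while you fire requests at the app
    apachectl -V | grep -i mpm          # which MPM Apache is using (mod_php requires the non-threaded prefork MPM)
    apache2ctl -M | grep -E 'mpm|php'   # which MPM/PHP modules are actually loaded
    systemctl status php7.2-fpm         # is PHP-FPM running, or is PHP going through mod_php?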
Try to take it from here. Best of luck.
Heroku imposes a 300 MB limit on slug size. Normally this should be more than enough for most web apps. However, our company uses libraries that are frequently 50 MB or more each, and there are a lot of them.
Is there any way to increase the slug size limit on Heroku? Has anyone had any success overcoming this limit?
Reading the documentation, it seems this is not something Heroku supports. The limit reflects their expectations for the storage consumed per application, and circumventing it may result in malfunctioning services; if they consider it a breach of contract, they may take your service down. So I would not advise trying too hard. The documentation also notes that smaller slugs deploy faster, which runs directly counter to what you're asking for.
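One legitimate way to shrink a slug, rather than raise the limit, is a .slugignore file at the root of the repo; paths listed there are removed after the push and before the buildpack runs, so they never end up in the slug. This only helps for files the app does not need at runtime. A minimal sketch, with example directory names:

    /docs
    /spec
    /test
    *.psd
    *.pdf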
I'm trying to move a Magento 1.7 site to a WebFaction 512MB plan. Currently it's on a several-GB Linode (and it absolutely rocks), but we have to move it onto our own server now and I'm having trouble getting it to perform well (typical page load is anywhere from 45s to several minutes, often timing out at 5 mins).
As mentioned in the title, I'm running Nginx with fastcgi_pass to the PHP-FPM socket (PHP 5.5.0, with Zend OPcache). FWIW, I've already moved our WordPress site to this server, and it's performing great under basically the same setup. I've also got a similar setup running on my local VM with similar PHP settings, and it doesn't have any trouble delivering a page in 3-5 s. I've done lots of profiling with XDebug, and I'm still at a loss: it says that about 90% of the time is spent in spl_autoload (handled by lib/Varien/Autoload), but I don't know if there's anything I can actually do about that. I've echoed get_include_path() and it doesn't include anything weird, so... I just don't know.
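For reference, these are the opcache / realpath cache settings I understand to matter most when an autoloader dominates a profile; the values below are just what I'm experimenting with, not tuned recommendations:

    ; php.ini (sketch)
    opcache.enable=1
    opcache.memory_consumption=128
    opcache.max_accelerated_files=10000   ; Magento 1.x ships a very large number of class files
    realpath_cache_size=256k              ; autoloading triggers many stat()/realpath() lookups
    realpath_cache_ttl=300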
Here's some relevant config info, at pastebin:
Nginx config
php-fpm.conf
php.ini
I'm at my wits' end, and am basically hoping for, at the very least, a simple sanity check: Magento on WebFaction, 512 MB, PHP FastCGI - is that crazy? Not sure if it matters, but we've only got about 75 products. Let me know if there's other info that might help; I've got the PHP slow logs, XDebug... yeah. I'm just unable to see the problem at this point, but I feel like I've got the tools to ferret it out, whatever it might be. Thanks in advance!
I'm afraid this will come down to an underpowered environment. Correct me if I'm wrong, but your hosting is probably a VPS, and sometimes, no matter how much optimisation you do, it's simply easier to upgrade the hosting.
I'm at a loss as to why you would move from a VPS to a shared hosting provider like WebFaction. If you bought a dedicated WebFaction server, why are you limited to only 512 MB?
The problem was not with my app or my Nginx/PHP settings at all; it turns out the server my account is on was totally overloaded, and that has since been dealt with. My app now loads really fast, basically as you would expect.
I receive the dreaded "Too Many Open Files" error after about 5 minutes of running my application. This is a showstopper for me. I know there is a 256 open file limit. I ran lsof to track down whether I have a leak, and found that many of the open handles are simply connections that Tomcat and other processes must make. The "nginx" process seems to be the only one that fluctuates, but even it only reaches a maximum of around 81. My application does not seem to leak file descriptors.
I absolutely love Cloud Foundry. It is the first PaaS that hasn't required me to refactor my application to make things work. When is the file limit going to be raised? I use Micro Cloud Foundry for testing, but I want to run on the hosted Cloud Foundry as soon as possible; I get this error in both versions.
Is there a way around this? I tried modifying the limit on the Micro Cloud instance, but I get errors saying I do not have the rights to make that kind of change. Any help or suggestions on this?
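For reference, this is roughly how I was counting descriptors with lsof (the exact columns and flags can vary by platform):

    # descriptors held by one process (replace <PID>)
    lsof -p <PID> | wc -l
    # rough count of open handles per process name
    lsof | awk '{print $1}' | sort | uniq -c | sort -rn | head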
The new file descriptor quota is in the following database migration:
https://github.com/cloudfoundry/cloud_controller_ng/blob/master/db/migrations/20130131184954_new_initial_schema.rb
Line 185.
This particular setting will not take effect on our http://cloudfoundry.com until the April time frame when we emerge from our beta status and our "Next Generation" components are in production.
If you run your own version of Cloud Foundry, you could run this migration, assuming you are using cloud_controller_ng.
The Micro Cloud Foundry we are using internally for development does have the new cloud controller in it. You can read how we get that running for our own purposes here:
We are in a bit of a transition period as we deprecate the legacy bits and move towards these NG components. Apologies for the hitches, but they will be worth the cost. Thanks for your patience.
Best,
Matt Reider
Product Manager
Cloud Foundry
I logged in as the root user. I modified both the /var/vcap/packages/dea/dea/lib/dea/agent.rb file and the /etc/security/limits.conf file.
For the agent.rb file I followed the instructions here:
http://mdahlman.wordpress.com/2012/04/20/micro_cloud_foundry/
For the limits.conf file I followed the instructions here:
http://myadventuresincoding.wordpress.com/2010/10/09/ubuntu-increasing-the-maximum-number-of-open-files/
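For the limits.conf change, the lines I added looked roughly like this (the vcap user name and the 4096 value are just what I used on the Micro Cloud Foundry VM; adjust for your own setup):

    # /etc/security/limits.conf
    vcap    soft    nofile    4096
    vcap    hard    nofile    4096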
I am not sure which fix helped or if it required modifying both files. The application seems to be working now so I am moving on. If someone has a better solution I would be happy to hear it.
This only allows me to run my application on a self-hosted Micro Cloud Foundry; it would be much better if someone had a solution for the hosted version. Unfortunately, I would have to find some way to keep my app under 256 file descriptors, and that is not likely to happen.
I have a few sites which are exhibiting a slow load time. All are WordPress 3.5. All are hosted through BlueHost. All are developed by me (built as child-themes of existing WP themes).
Using Safari Developer Tools, I see that they average 4–6 seconds (not ms) of latency before anything happens, which appears to be abnormally high. I've tried to wrap my head around latency, and I know I'm not the only one to ask about it here, but I cannot figure out whether the primary culprit is my hosting provider (Bluehost) or my own development.
Here are a couple of my sites with issues:
http://www.HubbardProductions.com
http://www.xla.com
Can anyone point me in the right direction? What can I do to reduce the latency?
You can see it here: your website is responding slowly. http://i.imgur.com/VIVoq.png
http://tools.pingdom.com/fpt/#!/jyKI0Kv01/http://hubbardproductions.com/
Chris, same problem here. Also with Bluehost + WordPress 3.5.
Some minutes ago, my sites even went down, and I was unable even to access cPanel. I received the following error:
Auth failed
69.89.31.120:2083 is temporarily down.
I contacted the technical staff and they told me to try again after deleting cookies, and they also sent me this URL:
https://my.bluehost.com/cgi/help/481
Which, in my case, is of little help, but perhaps it can help you.
I asked them if there was any problem with the servers lately and they said nope, no issues.
So, to answer your question, I would:
Wait a few days, in case it is temporary (I hope).
If not, I would run some tests with simple HTML pages, then PHP, then PHP + a simple SQL query, etc., to find the bottleneck and see whether it is a server issue or a WordPress issue (see the curl sketch after this list).
If I find it is a server issue, I would complain.
If everything fails, I would move my sites to other hosting. Bye-bye Bluehost. :(
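A quick way to see where the time goes is curl's timing variables; run this against a plain static file, a bare PHP file, and a WordPress page and compare the ttfb numbers (the URL below is just the one from your question):

    curl -o /dev/null -s -w "dns:%{time_namelookup}  connect:%{time_connect}  ttfb:%{time_starttransfer}  total:%{time_total}\n" http://www.hubbardproductions.com/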
Good luck!