Over the last few months, my Drupal sessions table has ballooned to several GB. It seems to have started when I upgraded to Drupal 5.20 (previously I thought Drupal automatically cleaned out old sessions). So I created a cron job to delete sessions older than two weeks, but it takes far too long to execute (the sessions table grows by about a million rows per week). Should Drupal actually be handling this, or do I just need to cut down the maximum session age until the execution time is acceptable?
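For reference, here's roughly what the cron job does (a sketch rather than my exact script; the credentials are placeholders, and it assumes the stock Drupal sessions schema, where timestamp is a Unix timestamp):

    <?php
    // Batched cleanup: delete in chunks so one huge DELETE doesn't lock
    // the sessions table for minutes at a time.
    mysql_connect('localhost', 'drupal', 'secret');  // placeholder credentials
    mysql_select_db('drupal');

    $cutoff = time() - 14 * 24 * 60 * 60;  // sessions older than two weeks

    do {
      mysql_query("DELETE FROM sessions WHERE timestamp < $cutoff LIMIT 10000");
    } while (mysql_affected_rows() > 0);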
Also, I thought Drupal was not supposed to create a session on the first request, precisely to avoid garbage entries from crawlers. Yet at least a quarter of the session entries are bots.
I came upon this when researching the issue again.
This is probably caused by the stock PHP configuration on some Linux distros, under which PHP's session garbage collection never runs. As a result, the Drupal session-cleaning function that's supposed to remove old sessions from the DB never runs either.
See all about it here: http://www.rymland.org/en/blogs/boaz/2_jan_09/making-php-session-expire-drupal-and-general
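If that's the case, the minimal fix is to turn PHP's session GC back on, since that's what actually invokes Drupal's session cleanup. A sketch (e.g. in settings.php; the lifetime value is just an example):

    <?php
    // Debian/Ubuntu ship with session.gc_probability = 0 and clean
    // /var/lib/php* from cron instead, which never touches sessions
    // stored in Drupal's database.
    ini_set('session.gc_probability', 1);       // GC runs on probability/divisor
    ini_set('session.gc_divisor', 100);         // ... of requests, here ~1%
    ini_set('session.gc_maxlifetime', 1209600); // max session age: two weeks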
It sounds like a bug in your code somewhere. Drupal shouldn't create a session on first request for that exact reason.
For Drupal 6 and lower, updates contain only bug fixes and security fixes, so I don't see how upgrading could have caused the problem.
Have you altered Drupal core in any way?
My queue jobs all run fairly seamlessly on our production server, but about every 2-3 months I start getting a lot of timeout exceeded/too many attempts exceptions.
Our app runs with event sourcing and many events are queued, so needless to say we have a lot of jobs passing through the system (generally 100-200k per day).
I have not found the root cause of the issues yet, but a simple re-deploy through Laravel Envoyer fixes them, most likely because the cache:clear command gets run.
Currently the cache is handled by Redis, which sits on the same server as the app. I was considering moving the cache to its own server/instance, but that still wouldn't tell me the root cause.
Does anyone have any ideas what might be going on here and how I can diagnose/fix it? My guess is that the cache is getting overloaded, running out of space, or leaking over time, but I'm not really sure where to go from here.
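For what it's worth, here's the kind of check I've started logging to see whether Redis actually fills up before the failures begin (a sketch; it assumes the phpredis client and the default Redis connection - with predis the INFO array is grouped by section):

    <?php
    // e.g. in a scheduled artisan command: snapshot Redis memory stats.
    use Illuminate\Support\Facades\Log;
    use Illuminate\Support\Facades\Redis;

    $info = Redis::command('info'); // raw Redis INFO output

    Log::info('redis memory', [
        'used_memory_human' => $info['used_memory_human'] ?? null,
        'maxmemory'         => $info['maxmemory'] ?? null,
        'evicted_keys'      => $info['evicted_keys'] ?? null, // evictions mean the cache is full
    ]);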
Check:
the version of your Redis, and update the predis package
the version of your Laravel
your server
I hope this gives you some solutions.
I did some research on this one, and it seems to be an issue for some users.
I noticed that Laravel logs me out automatically and intermittently. It's quite hard to replicate, but it happened twice during a demo/presentation, which, as you can understand, has an impact.
I can imagine sessions being suspect number one here, but whatever I tried didn't seem to work.
How did you overcome this issue?
https://github.com/laravel/framework/issues/7549
On rare occasions, the session file can become corrupted if one process reads in a half-written file. This condition, however, is difficult to reproduce.
Laravel team member @GrahamCampbell:
This is a known limitation of the file-based session driver.
Using a different session driver should do the trick. (My preference tends to be Redis, but the database driver may be a bit easier to set up for a demo.)
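For the database driver, the change is small (a sketch; SESSION_DRIVER is the stock env variable):

    <?php
    // config/session.php (excerpt) - move off the file driver.
    // For 'database', create the table first with:
    //   php artisan session:table && php artisan migrate
    return [
        'driver' => env('SESSION_DRIVER', 'database'),
        // ... rest of the stock config unchanged ...
    ];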
We have Magento EE 1.14. The admin was working fine until two days ago, when its speed dropped dramatically. The frontend is not affected, and there have been no changes in code or server configuration. Here are my attempts to fix the problem, but nothing has worked:
Log cleaning is properly configured
Removed two unused extensions, but no improvement
Tried disabling non-critical extensions to see if speed would improve, but no luck either
I can NOT use Redis cache at this time, but I have configured a new server that uses Redis cache and will move to it next month
Sometimes the backend gains speed for a few minutes
I enabled the profiler; the source of the delay is mage (screenshot attached)
Here are my questions:
Is there any way to know the exact reason for the mage delay?
Are there other tests I can use to identify the cause of the delay?
Thanks in advance,
It could be a delay connecting to external resources. Do you have New Relic or similar software? Check there for slow connections. If you don't have NR, profile the admin with blackfire.io. The Magento profiler is really unhelpful :)
Follow the steps below:
Delete unused extensions
It is best to remove unused extensions rather than just disabling them. If you disable an extension, it still exists in the database, which not only increases the size of your database (DB) but also adds to DB read time. So keep your approach clear: if you don't need it, DELETE it.
Keep your store clean by deleting unused and outdated products
Keep in mind that a clean store is a fast store. Caching and displaying only a limited set of products can keep the front end fast even if there are more than 10,000 items in the back end, but the back end still has to carry them all: if the number of products keeps increasing, it gets slower. So it is best to remove unused products, and to repeat this every few months to keep the store fast.
Reindexing
One of the most common reasons website administrators experience slow performance when saving a product is reindexing. Whenever you save a product, the Magento backend starts to reindex, and since you have a lot of products, this takes a while to complete, causing unnecessary delays.
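If saving products is the pain point, one common workaround is to put the indexes into manual mode and rebuild them off-peak instead. A sketch for Magento 1.x, run from the Magento root (whether it fits depends on how fresh your indexes need to be):

    <?php
    // Switch every index to "Manual Update" so product saves stop
    // triggering reindexing, then rebuild each one immediately.
    require_once 'app/Mage.php';
    Mage::app('admin');

    $indexer = Mage::getSingleton('index/indexer');
    foreach ($indexer->getProcessesCollection() as $process) {
        $process->setMode(Mage_Index_Model_Process::MODE_MANUAL)->save();
        $process->reindexEverything(); // full rebuild of this index
    }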
Clear the Cache
Caching matters for any web application because it saves the web server from processing the same request again and again.
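If you suspect a stale or bloated cache, it can be flushed from a small maintenance script as well as from the admin UI (a sketch for Magento 1.x; note that flush() empties the whole cache storage, so the next few page loads will be slow while it warms up again):

    <?php
    // Flush the entire Magento cache storage.
    require_once 'app/Mage.php';
    Mage::app('admin');
    Mage::app()->getCacheInstance()->flush();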
I have a report that uses a shared dataset. It also has several different slicers for viewing the data. The dataset is very large, so I created a cache for it so it doesn't take an eternity to load every time the user clicks on a slicer. The cache is set to expire every morning at 3:30am and refresh at 4am. The report is going to be used by 15 different clients and my company has a separate database set up for each client. So there are 15 versions of the report, each with a different data source.
The problem I'm having is that the cache is not working consistently. One day, all the reports run off the morning cache, the next day only a few reports use the morning cache and the others pull the live data (which means it takes several minutes to load). I've gone in and cleared the cache for each client, and the next day everything works fine, but a couple days go by and it's back to inconsistent.
One thought I had was there may be multiple copies of the same cache being stored and the report doesn't know which to use, so it doesn't use any. This shouldn't happen because the cache is cleared a half hour before it is refreshed, but is this possible? I would think if there were multiple copies of a cache, the report would use the most recent.
Another idea I had was that because there are 15 reports caching with the same parameters at the same time, maybe this is confusing the report. I would think it would use the cache associated with its data source, but could this be happening? Should I add a parameter to the dataset that has the client name, so there is no confusion?
Any other thoughts on what could be causing this would be helpful, thanks.
I figured out what was happening. Since all the reports were trying to cache at the same time, not all of the caches were being saved to the report server successfully. I staggered the cache times, and that fixed my problem.
I have a Drupal 6 site that is frequently (about once a day) going down. The hosting provider is reporting that something in our site code is occupying all Apache threads but keeping them idle, making the server run out of threads to respond to new requests. A simple restart of Apache frees the threads and fixes the issue, though it reoccurs within a few hours or a day.
I have no idea how to troubleshoot this issue and have never come across PHP code doing this. Is there some kind of Apache settings change I can make to capture more information about what might be keeping a thread occupied but idle? What typical PHP routines can cause this behavior? I looked for code that connects to external resources, but didn't see any issues there.
Any hints for what to look at, capture more information, or PHP code that can cause this would be most useful.
With Drupal 6 you could have the poormanscron module running from time to time, or the classical cron (triggered from crontab via wget or similar).
One heavy cron operation can then put your database under heavy load. Once database response time gets very slow, every HTTP request gets very slow (for example, sessions are stored in the database, and several hundred queries are needed for a Drupal page). With all requests slowed down, every available PHP process can end up in an 'occupied' state.
Restarting Apache stops all current processes. If you run cron via wget rather than drush, cron tasks are a good thing to check (cron run via drush goes through php-cli, so restarting Apache would not kill it). You can try a module like Elysia Cron to get more details on cron tasks and maybe isolate the long ones (it reports task durations).
This effect (one request hurting the database badly, all requests slowing down, no processes left) could also be produced by one bad piece of code in any of your installed modules. That would be harder to detect.
So I would make sure slow queries are tracked in MySQL (see the my.cnf options), then analyze those queries with a tool like mysqlsla. The problem is that sometimes one query is so big that all the queries after it become slow, so use the time of the crash to spot the first ones. Also enable the MySQL option to log queries not using indexes.
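Something like this in my.cnf does it (a sketch; the threshold is illustrative, and on older MySQL 5.0 the setting is called log-slow-queries):

    [mysqld]
    slow_query_log                = 1
    slow_query_log_file           = /var/log/mysql/mysql-slow.log
    long_query_time               = 2   # seconds; lower it once you have data
    log_queries_not_using_indexes = 1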
Another way to get all Apache processes stalled on a PHP operation in Drupal is a lock problem. Drupal uses its own lock implementation on top of MySQL. You could add some watchdog calls (Drupal's internal debug messages) in those files to try to detect lock problems.
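For example (a sketch; 'mymodule' and the threshold are placeholders, and where you put it depends on which code path you suspect):

    <?php
    // Time a suspected blocking section and report it via watchdog().
    $start = microtime(TRUE);
    // ... the suspected blocking call goes here ...
    $elapsed = microtime(TRUE) - $start;
    if ($elapsed > 5) {
      watchdog('mymodule', 'Spent @sec seconds in @section.', array(
        '@sec' => round($elapsed, 2),
        '@section' => 'suspected lock wait',
      ), WATCHDOG_WARNING);
    }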
You could also have some external HTTP requests made by Drupal: calls to external websites like Facebook, Google, tiny-URL tools, or drupal.org's module update checks (which always check all your modules, even the ones you wrote yourself). If the remote website is down or filtering your traffic, you'll have problems (though an Apache restart would not help with that, so it may not be the cause).