I have a server at work based on Strapi v3.6.3. Over time, the server startup time (npm run develop) has gone up from under 1 minute to about 4 minutes at present. I want to investigate this to see what task(s) are performed during Strapi startup and how much time they consume.
How can I go about this analysis? Unfortunately, I'm unable to find any documentation to help with this.
Is there a logging system where Strapi maintains startup logs that I could switch on to check further?
I've tried setting STRAPI_LOG_LEVEL to trace (as documented at https://strapi.gitee.io/documentation/3.0.0-beta.x/concepts/logging.html) and booting Strapi, but it does not help.
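One generic approach worth trying (a rough sketch, not Strapi-specific; it relies on Node's internal Module._load, so it's a debugging hack, and it assumes Strapi's CLI entry point lives at node_modules/strapi/bin/strapi.js, which may differ in your setup):

// time-requires.js -- log every require() slower than 100 ms.
// A parent module's time includes its children, so the numbers are
// cumulative; look for the deepest slow entries.
'use strict';
const Module = require('module');
const originalLoad = Module._load;
Module._load = function (request, parent, isMain) {
  const start = Date.now();
  const exports = originalLoad.apply(this, arguments);
  const elapsed = Date.now() - start;
  if (elapsed > 100) {
    console.log(`${String(elapsed).padStart(6)} ms  ${request}`);
  }
  return exports;
};

Preload it when starting Strapi:

node -r ./time-requires.js node_modules/strapi/bin/strapi.js develop

This only covers module loading; for the rest of the boot (database sync, admin build, etc.), node --cpu-prof on a recent Node (12+) produces a .cpuprofile you can open in Chrome DevTools.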
My queue jobs all run fairly seamlessly on our production server, but about every 2–3 months I start getting a lot of "timeout exceeded"/"too many attempts" exceptions.
Our app uses event sourcing and many events are queued, so needless to say we have a lot of jobs passing through the system (100–200k per day, generally).
I have not found the root cause yet, but a simple re-deploy through Laravel Envoyer fixes the issue. This is most likely due to the cache:clear command being run.
Currently the cache is handled by Redis, which lives on the same server as the app. I was considering moving the cache to its own server/instance, but that still would not help me find the root cause.
Does anyone have any ideas what might be going on here and how I can diagnose/fix it? I am guessing the cache is just getting overloaded/running out of space/leaking over time, but I'm not really sure where to go from here.
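For reference, the quickest way I know of to check the "running out of space" part (standard redis-cli commands, run on the cache server; if evicted_keys keeps climbing, Redis is at its maxmemory limit and dropping data):

redis-cli info memory | grep -E 'used_memory_human|maxmemory_human|maxmemory_policy'
redis-cli info stats | grep evicted_keys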
Check:
- the version of your Redis, and update the predis package
- the version of your Laravel
- your server
I hope this gives you some leads.
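One more specific thing worth checking for this exact pair of symptoms ("timeout exceeded" plus "too many attempts"): in Laravel, the queue connection's retry_after should be comfortably longer than the worker's --timeout, otherwise a job that is still running can be handed to a second worker and eventually die with a MaxAttemptsExceededException. A sketch of the relevant part of config/queue.php (the values are illustrative, not a recommendation):

// config/queue.php
'connections' => [
    'redis' => [
        'driver'      => 'redis',
        'connection'  => 'default',
        'queue'       => 'default',
        // Keep this larger than the worker's --timeout, e.g. when
        // workers are started with: php artisan queue:work --timeout=90
        'retry_after' => 120,
    ],
],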
I can see the logs for a heroku app with heroku logs -t
Is there a way to easily see how many hits an app has received in, say, the past 24 hours? (Preferably using a quick command in the CLI, but otherwise through the Heroku website.)
There are a few ways I see here:
1. The Heroku dashboard provides you with a Metrics tab, where you can see the throughput of your application.
2. If this is not exact enough, you can add a logging addon (Logentries, for example) and then analyze the router logs there. Logentries provides you with counting, grouping, etc.
3. Same as a logging addon, but you can also add your own log drain and then analyze the logs yourself :)
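For a rough one-off count from the CLI without an addon (note that heroku logs only retains the most recent 1,500 lines, so this samples recent traffic rather than a full 24 hours; my-app is a placeholder):

heroku logs -n 1500 --app my-app | grep -c 'heroku\[router\]'

Each router line corresponds to one request, so the count approximates recent hits.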
Disclaimer: Please tell me if the question is too broad, and I will do my best to narrow it down.
We have a Heroku app running two 1X web dynos. This infrastructure has been running for the last 9 months.
However, in the last few weeks we have had several episodes where the app's response times skyrocketed for about an hour before returning to normal, without us doing anything about it.
In the pictures below you can find an extract of Heroku Metrics during one of these "episodes", which happened yesterday afternoon.
As you can see, response time goes up until eventually almost any request made to the server times out. During the event it was barely possible to even load the home page of our website, hosted on this app. Most of the time we would get the "Application Error" Heroku page.
What I see is:
The amount of requests to the server (failed or not) was not unusually high (less than 1,000 every 10 minutes). For this reason, I think a DDoS attack is out of the picture.
Everything shown by Heroku Logs is that the failed requests get a 503 (Service Unavailable) error, which would make me think of an overload.
The dynos do not seem overloaded. The memory usage is low, and the dyno load is reasonable, nothing unusual.
Heroku reported no issue during our crash event, as https://status.heroku.com/ states (last incident was on the 1st of July).
Restarting the dynos through several methods (from the interface, a command line or triggering an automatic deployment via our Gitlab repository) had no effect.
I am quite unsure how to interpret these metrics, and what the solution would be to ensure this kind of episode does not happen again.
So my question is: where should I look? Is there some kind of documentation about how to investigate crashes on Heroku apps?
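One concrete starting point (a sketch; my-app is a placeholder): the Heroku router tags every failed request with an error code — H12 for request timeouts, H13 for connection closed, H19/H21 for backend connection problems — so grepping for those codes during an episode narrows down where the time is lost:

heroku logs -t --app my-app | grep 'code=H'

H12 in particular would point at requests stuck inside the dynos (a slow downstream service, an exhausted database connection pool, etc.) rather than at the router or the platform.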
I have deployed a simple Spring Boot app on Google App Engine Flexible. The app has two APIs: one to add user data into the DB (xxx.appspot.com/add) and the other to get all the user data from the DB (xxx.appspot.com/all).
I wanted to see how GAE scales under load, so I used JMeter to generate load with 100-user concurrency ramped up over 10 seconds, calling these two APIs with a half-second delay, forever. While it runs fine for some time (with just one instance), it starts to fail after 30 seconds or so with a java.net.SocketException or "The server responded with a status of 502".
After this error, when I try to access the same API from the browser, it displays:
Error: Server Error
The server encountered a temporary error and could not complete your request. Please try again in 30 seconds.
The service is back to normal after 30 minutes or so, and whenever the load test runs it repeats the same behavior as above. I expect GAE to auto-scale based on the incoming load and handle it without any downtime (using multiple instances); instead it just crashes or blocks the service (without any information in the log). My app.yaml configuration is:
runtime: java
env: flex
service: hello-service
automatic_scaling:
min_num_instances: 1
max_num_instances: 10
I am a bit stuck with this one; any help would be greatly appreciated. Thanks in advance.
The solution was to increase the resource configuration; details below.
Given that I did not set a resources parameter, it defaulted to the pre-defined values for both CPU and memory. In this case, the default memory was set at 0.6 GB, and App Engine Flex instances use about 0.4 GB of that for overhead processes. Given that Java is known to consume a lot of memory, there is a great likelihood that the overhead processes plus the JVM consumed more than was available. Instances in App Engine are restarted for a variety of reasons, including optimization of memory use. This explains why the instances went away and why the logs show Tomcat starting up (they got restarted), ending in 502 errors because nginx is not able to complete the request while the instance is down. Fixing the above may lessen, if not completely eliminate, the 502s.
After I specified the resources attribute and increased the configuration in app.yaml, the 502 errors seem to be gone.
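For reference, a resources block in app.yaml looks like the following (the values here are illustrative, not the exact ones I used; size them for your app's footprint):

resources:
  cpu: 2
  memory_gb: 2.0
  disk_size_gb: 10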
I have a Drupal 6 site that is frequently (about once a day) going down. The hosting provider is reporting that something in our site code is occupying all Apache threads but keeping them idle, making the server run out of threads to respond to new requests. A simple restart of Apache frees the threads and fixes the issue, though it reoccurs within a few hours or a day.
I have no idea how to troubleshoot this issue and have never come across PHP code doing this. Is there some kind of Apache settings change I can make to capture more information about what might be keeping a thread occupied but idle? What typical PHP routines can cause this behavior? I looked for code that connects to external resources, but didn't see any issues there.
Any hints on what to look at, how to capture more information, or what PHP code can cause this would be most useful.
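Regarding the Apache-settings part of the question, one option (a sketch, assuming mod_status is available and Apache 2.2-era syntax, contemporaneous with Drupal 6): enable the extended server-status page, then look at what each busy-but-idle worker was last doing when the threads pile up:

# httpd.conf
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>

Then curl http://localhost/server-status during an incident: the SS column shows how long each slot has been sitting on its current request, and the Client/Request columns show which URL it is stuck on.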
With Drupal 6 you could have the poormanscron module running from time to time, or even classic cron (via crontab and wget, for example).
One heavy cron operation could then put your database under heavy load. And if your database response time becomes very slow, every HTTP request becomes very slow (sessions are stored in the database, for example, and several hundred queries are needed for a Drupal page). Having all requests slow down can put all the available PHP processes in an "occupied" state.
Restarting Apache stops all current processes. If you run cron via wget rather than via Drush, cron tasks are a good thing to check (running cron via Drush would run it under php-cli, so restarting Apache would not kill it). You can try a module like Elysia Cron to get more detail on cron tasks and maybe isolate the long ones (it gives you a report on task durations).
This effect (one request hurting the database badly, all requests slowing down, no more processes available) could also be caused by one bad piece of code in any of your installed modules. That would be harder to detect.
So I would make sure slow queries are logged in MySQL (see the my.cnf options sketched below), then analyse those queries with tools like mysqlsla. The problem is that sometimes one query is so heavy that all subsequent queries become slow, so use the time of the crash to spot the first ones. Also enable the MySQL option that logs queries not using indexes.
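A minimal my.cnf sketch for that (paths and thresholds are illustrative; on old MySQL 5.0 the option is log-slow-queries = <file> instead of the two slow_query_log settings):

[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 2
log_queries_not_using_indexes = 1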
Another way to get all Apache processes stalled on a PHP operation with Drupal is a lock problem. Drupal uses its own lock implementation backed by MySQL. You could maybe add some watchdog() calls (Drupal's internal debug messages) in those files to try to detect lock problems.
Then you could also have some external HTTP requests made by Drupal: calls to external websites like Facebook, Google, tiny-URL tools, or drupal.org's module update checks (which always try to find updates for all modules, even the ones you wrote yourself). If the remote website is down or filtering your traffic you'll have problems (but the Apache restart would not help you there, so it may not be that).