I ran into strange Heroku behavior after I deployed my Strapi backend to it.
From time to time (a few times a day), all of my backend data is completely cleared, deleting all of the posts I made as well as all users and permissions.
I have to re-register every time to regain access and then manually add all the content once again.
Can you please tell me why this happens and how to avoid it? I am not making any commits during this time.
I have built a new site for a customer, taken over managing their domain, and moved them to new hosting. The previous site and hosting have been completely taken down.
I am running into a major issue that I am not sure how to fix. The previous developer used a service worker to cache and load the previous site. The problem is that users who had previously visited the site keep seeing the old one, since it all loads from a cache. The old site no longer exists, so I have no way of adding any JavaScript to remove the service worker from their browsers unless they hit the new site.
Has anyone ever had this issue and found a way to resolve it? Note: asking the users to delete the service worker from their browser won't work.
You can use cache busting to achieve this. As KeyCDN puts it:
Cache busting solves the browser caching issue by using a unique file version identifier to tell the browser that a new version of the file is available. Therefore the browser doesn't retrieve the old file from cache but rather makes a request to the origin server for the new file.
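As a quick illustration, here is a minimal TypeScript sketch of the idea; the version value is hypothetical (in practice it is usually a build number or content hash injected at build/render time):

```typescript
// Cache-busting sketch: append a release identifier to asset URLs so each deploy
// produces URLs the browser has never cached before. ASSET_VERSION is illustrative.
const ASSET_VERSION = 'abc123';

export function bust(url: string): string {
  const separator = url.includes('?') ? '&' : '?';
  return `${url}${separator}v=${ASSET_VERSION}`;
}

// bust('/css/main.css')  ->  '/css/main.css?v=abc123'
```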
If you want to update the service worker itself, you should know that an update is triggered if any of the following happens:
A navigation to an in-scope page.
A functional event such as push or sync, unless there's been an update check within the previous 24 hours.
Calling .register(), but only if the service worker URL has changed. However, you should avoid changing the worker URL (see the sketch below).
(See "Updating the service worker" for details.)
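Because the update check fetches the worker script from your (now new) server, one workaround in this situation is to serve a tiny "kill switch" worker from the new site at the same URL the old worker was registered under. The path is an assumption; it must match whatever the previous site actually registered. A minimal sketch, compiled against the WebWorker lib:

```typescript
// Kill-switch worker, served from the new host at the OLD service worker's URL
// (e.g. /service-worker.js — hypothetical; use the previous site's actual path).
const sw = self as unknown as ServiceWorkerGlobalScope;

sw.addEventListener('install', () => {
  // Activate immediately instead of waiting for the old worker's tabs to close.
  sw.skipWaiting();
});

sw.addEventListener('activate', (event) => {
  event.waitUntil(
    (async () => {
      // Delete every cache the old worker created for this origin.
      const keys = await caches.keys();
      await Promise.all(keys.map((key) => caches.delete(key)));
      // Remove the registration so future visits go straight to the network.
      await sw.registration.unregister();
    })()
  );
});
```

Any in-scope navigation or push/sync event listed above triggers the update check, installs the kill switch, and from then on returning visitors get the live site.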
Using the Clear-Site-Data header might be the most thorough solution.
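If you can set response headers on the new site, here is a rough sketch of that approach (Express is used purely as an example server; any stack that can set a response header works):

```typescript
import express from 'express';

const app = express();

// Ask supporting browsers to drop this origin's HTTP cache and storage.
// The "storage" directive also unregisters service workers.
app.use((_req, res, next) => {
  res.set('Clear-Site-Data', '"cache", "storage"');
  next();
});

app.get('/', (_req, res) => {
  res.send('new site');
});

app.listen(3000);
```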
We have Magento EE 1.14. The admin was working fine until two days ago, when its speed dropped dramatically. The frontend is not affected, and there have been no changes in code or server configuration. Here is what I have tried so far, but nothing has worked:
Log cleaning is properly configured.
Removed two unused extensions, but saw no improvement.
Tried disabling non-critical extensions to see if speed would improve, but no luck either.
I cannot use a Redis cache at this time, but I have configured a new server that uses Redis and will move to it next month.
Sometimes the backend speeds up for a few minutes.
I enabled the profiler; the source of the delay is mage (screenshot attached).
Here are my questions:
Is there any way to find the exact reason for the mage delay?
Are there any other tests I can use to identify the cause of the delay?
Thanks in advance,
It could be a delay connecting to external resources. Do you have New Relic or similar software? Check there for slow connections. If you don't have New Relic, profile the admin with Blackfire.io; the Magento profiler is really unhelpful :)
Follow the steps below:
Delete unused extensions
It is best to remove unused extensions rather than just disabling them. If you disable an extension, it still exists in the database, which not only increases the size of your database (DB) but also adds to its read time. So keep your approach clear: if you don't need it, DELETE it.
Keep your store clean by deleting unused and outdated products
Keep in mind that a clean store is a fast store. The frontend can be kept fast by caching and displaying only a limited set of products, even if there are more than 10,000 items in the backend, but the backend still has to carry their weight. As the number of products keeps growing, the backend may slow down, so it is best to remove unused products. Repeat this every few months to keep the store fast.
Reindexing
One of the most common reasons administrators experience slow performance while saving a product is reindexing. Whenever you save a product, the Magento backend starts to reindex, and if you have a lot of products this takes some time to complete, causing unnecessary delays.
Clear the Cache
Caching is important for any web application because it saves the web server from processing the same request again and again; clearing it from time to time keeps stale or bloated entries from slowing things down.
I have a report that uses a shared dataset. It also has several different slicers for viewing the data. The dataset is very large, so I created a cache for it so it doesn't take an eternity to load every time the user clicks on a slicer. The cache is set to expire every morning at 3:30am and refresh at 4am. The report is going to be used by 15 different clients and my company has a separate database set up for each client. So there are 15 versions of the report, each with a different data source.
The problem I'm having is that the cache is not working consistently. One day, all the reports run off the morning cache, the next day only a few reports use the morning cache and the others pull the live data (which means it takes several minutes to load). I've gone in and cleared the cache for each client, and the next day everything works fine, but a couple days go by and it's back to inconsistent.
One thought I had was that there may be multiple copies of the same cache being stored, and the report doesn't know which to use, so it doesn't use any. This shouldn't happen because the cache is cleared half an hour before it is refreshed, but is it possible? I would think that if there were multiple copies of a cache, the report would use the most recent.
Another idea I had was that because there are 15 reports caching with the same parameters at the same time, maybe this is confusing the report. I would think it would use the cache associated with its data source, but could this be happening? Should I add a parameter to the dataset that has the client name, so there is no confusion?
Any other thoughts on what could be causing this would be helpful, thanks.
I figured out what was happening. Since all the reports were trying to cache at the same time, not all the caches were being successfully saved to the report server. I staggered the cache refresh times, and that fixed my problem.
We have for some time now been experiencing problems with data being saved in our SQL database.
Sometimes records are saved with data that does not match the rest of the row, making it seem like, at some point, data is being 'swapped' for something else, perhaps another user's data, before being passed to the database.
We use TransactionScopes throughout with an isolation level of ReadCommitted, which makes me think the data integrity issue lies within the application rather than at the database level.
We use session state extensively, and we are starting to think that the corrupt data appears at around the times we deploy updates to the system during the day.
We use the aspnet_state service to persist sessions across application restarts.
Our users rely on terminal sessions, so multiple users all log into the same server and launch the system via a browser.
In the past we noticed users logging in with the same domain credentials, but we are now relatively confident that users log in with unique accounts.
99.9% of the data is correct but we have been struggling to understand what could be causing this intermittent data integrity issue.
We are now limiting our deploys to outside working hours on pain of death, but this is not always possible.
Can anyone shed light on why/how this might be happening?
EDIT: We have now isolated this to the DAL layer; see "SQL query returns incorrect value in multi user environment".
I have recently been fighting this, and had a similar problem to yours: around 95% of the data written back was correct. I looked at various possible reasons; the main culprit was that some users on the network had downloaded Chrome and were opening records within Chrome, breaking our session IDs because Chrome ignored our sessions.
The other cause was users either not closing the browser or not logging off the application, allowing either the same user or a completely different user to pick up and reuse the session ID.
After introducing a browser check that rejected Chrome, educating the users to make sure they log off, and moving updates outside busy periods, the problem was just about gone.
I forgot to mention: in IIS it is also best to turn off caching in Output Caching; set both the user-mode and kernel-mode options to prevent caching.
I have a default cache that is fairly small and static. It contains just string keys and string values.
Since I won't be using anywhere near the allowed amount of memory, I'd like to just preload all of the objects into the cache on startup and have them never expire. I added a log message on start indicating that the cache was loaded.
Right now the project is still in development, so the cache isn't being hit often (other than by web spiders/crawlers/scripts). The problem I'm seeing is that every hour to every few hours, I see the log message that my cache was loaded. I'd expect it to load once and then not reload until I force it to.
Is there any way to keep the cache "alive" so that it doesn't have to frequently reload? Is it like an IIS worker process that dies out after some amount of inactivity?
FYI, I have the cache configured with Expiry Policy: Never, Time: 0 min, Eviction: Disabled. The way I check whether the cache is still alive is that on load I add a special marker object to the cache; then I check whether that object exists, and if it doesn't, I assume the cache needs to be reloaded.
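For reference, a minimal, framework-agnostic sketch of that sentinel-key check (an in-memory map stands in for the real cache product; the key and loader names are illustrative):

```typescript
// In-memory stand-in for the real cache; the sentinel key and loader are illustrative.
const cache = new Map<string, string>();
const SENTINEL_KEY = '__cache_loaded__';

function ensureLoaded(loadAll: () => Record<string, string>): void {
  if (cache.has(SENTINEL_KEY)) return; // cache is still alive, nothing to do

  console.log(`${new Date().toISOString()} cache (re)loaded`);
  for (const [key, value] of Object.entries(loadAll())) {
    cache.set(key, value);
  }
  cache.set(SENTINEL_KEY, 'true');
}

// Example: call this before every read so a dropped cache repopulates itself.
ensureLoaded(() => ({ greeting: 'hello', farewell: 'goodbye' }));
```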
For anyone else who stumbles across this: I ended up creating a scheduled task that hits the cache every 5 minutes. Since then, I haven't had any issues with it reloading. Not sure if this is the best answer, but it worked for me.
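If you'd rather script that keep-alive ping than use the OS task scheduler, here is a minimal sketch (the endpoint URL is hypothetical; point it at any page that reads from the cache, and note it needs Node 18+ for the global fetch):

```typescript
// keepalive.ts — ping a cache-backed endpoint every 5 minutes so the host
// never considers the application idle.
const CACHE_PING_URL = 'https://example.com/cache-health'; // hypothetical endpoint
const INTERVAL_MS = 5 * 60 * 1000;

async function ping(): Promise<void> {
  try {
    const res = await fetch(CACHE_PING_URL);
    console.log(`${new Date().toISOString()} ping -> ${res.status}`);
  } catch (err) {
    console.error(`${new Date().toISOString()} ping failed`, err);
  }
}

void ping();
setInterval(ping, INTERVAL_MS);
```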