The site I'm building with Laravel 4 randomly returns a 500 error on the staging server, either as an error page saying a required file is missing or as a blank page. Once this happens, no page of the site loads until Apache is restarted, which fixes the issue. No file or database changes are made at the time the issue starts or during the Apache restart, and clearing the cache with artisan doesn't help. The staging server has 512MB of RAM and 20GB of disk space. This started happening last week and is extremely hard to replicate or watch error logs for, since it seems to happen randomly every few days.
I would think this issue has nothing to do with the database, assets, or disk reads, because only restarting Apache helps.
Are there any known issues with Laravel or any of its vendor packages? Does anyone know a fix for this? All help is appreciated!
Empty your vendor folder, run composer update/install, and restart your server.
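If it helps, the full sequence might look like this on a typical setup (the project path and the Apache service name are assumptions; adjust them to your server):

cd /path/to/your/app            # hypothetical path to the Laravel 4 project
rm -rf vendor/                  # empty the vendor folder
composer install                # reinstall dependencies from composer.lock
sudo service apache2 restart    # restart the web server (service name assumed)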
I am facing a critical issue in my application, which is built with Laravel and Angular. I am getting the old email templates on the live site, while on my local server I get the latest updated ones. The deployment process is automatic: I just commit the code to Bitbucket, and a Bitbucket Pipeline then pushes the code directly to the AWS server.
I have already run the Laravel cache commands and restarted the jobs, but I am still getting the same issue. If anyone has experienced this or knows how to resolve it, please guide me!
I think you can try one of the following ways to overcome the issue; I faced a similar issue and resolved it like this (a command sketch follows the list):
Delete the cache files manually from Laravel's storage/framework/views
Upload the code for the particular module directly to AWS, without using the pipeline
Restart your server
This should resolve your issue!
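For the cache deletion and the server restart, a rough sketch, assuming SSH access to the AWS server and an Apache setup (the project path and service name are guesses for illustration):

cd /var/www/your-app                    # hypothetical project root on the server
rm -f storage/framework/views/*.php     # delete Laravel's compiled Blade view cache
php artisan view:clear                  # same effect via artisan, if your Laravel version has it
sudo service apache2 restart            # restart the web server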
Since you are using a Laravel and Angular application deployed on AWS, I assume that Bitbucket is pushing the code and build commands are fired on every push. There are a few things that can help you (see the sketch after this list):
1. Build the Angular side on every push, since the Angular build hashes all the files in the dist folder
2. Delete the Laravel cached view files, which are stored in storage/framework/views
3. Check that your server is pointing to the right project folder
If point 1 or 2 works, you can automate the process by running the corresponding CLI command after every push; both points are achievable via CLI commands.
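As a sketch, the two commands that would cover points 1 and 2 (the exact build flags and paths depend on your Angular version and repo layout; treat these as assumptions):

ng build --prod                         # point 1: rebuild Angular so the dist/ assets get fresh hashes
rm -f storage/framework/views/*.php     # point 2: clear Laravel's compiled view cache

In a Bitbucket Pipelines setup, these would go into the deployment step of bitbucket-pipelines.yml so they run on every push.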
I downloaded Magento together with DevBox to build my webshop locally. After installation from the terminal, I got the URLs for the shop and the admin (something like http://127.0.0.1:32769/ and http://127.0.0.1:32769/admin/).
This all worked great! But then I stopped for the time being and shut down the working container from the command line.
When I wanted to continue, I started the working container again, and it now runs on another port. However, on this port only the HTML is loaded.
The system tries to download the rest of the files on the old port (:32769), which is obviously not working. Stuck here!
It's hard to say without seeing your system (I'm not super familiar with the DevBox), but regarding
The system tries to download the rest of the files on the old port (:32769), which is obviously not working. Stuck here!
It sounds like the URLs generated by your system might be cached from the previous session, which means they still contain the old port number. If you clear your system's cache store, that should solve the problem.
I'm not 100% sure whether the DevBox ships with Redis as a cache store, or the file system, Varnish, or something else; it depends on the options you chose. The first thing I'd try is running Magento's cache clean command from the command line:
php bin/magento cache:clean
The next thing I'd try is removing the var/cache/* folders. If you're using Varnish or Redis, you'll need to research how to manually clear those caches. Hope that helps!
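If cache:clean isn't enough, a manual sweep might look like this, run from the Magento root inside the container (the redis-cli step only applies if Redis is actually your configured cache backend):

php bin/magento cache:flush             # flush the cache storage, more aggressive than clean
rm -rf var/cache/* var/page_cache/*     # remove the file-based caches
redis-cli FLUSHDB                       # only if Redis is the configured cache store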
My problem is that I don't seem to be able to refresh the Magento Redis cache from the admin page. I realize the problem could come from many sources, but my gut tells me it has something to do with the backend being on a separate server. My Magento installation is as follows:
Magento CE 1.8
Backend server and NFS (media) on an Amazon AWS EC2 instance at http://admin.example.com
Database on AWS RDS MySQL
2 app servers (scalable to more) on AWS Elastic Beanstalk at http://www.example.com (Route 53)
Regular backend cache (database 0), Lesti-FPC (database 0), and redis_session (database 1) on AWS ElastiCache Redis
I originally had Lesti-FPC configured to use database 2 on the Redis cache. It seemed to work pretty well as far as I could tell, until I realized that I couldn't flush the cache at all from the admin System > Cache Management page. "Flush Magento Cache," "Flush Cache Storage," "disable," and "refresh" did nothing. I could only flush it by rebooting the Redis node or going in with redis-cli and using Redis commands.
I then tried configuring Lesti-FPC to use database 0, as described above. That worked better: I could now flush the FPC with "Flush Cache Storage," although the other options still didn't work. At the time, I assumed it was an issue specific to Lesti-FPC. But anyway, "Flush Cache Storage" was good enough for me at the time, especially once I discovered that I could flush the cache through code using
Mage::app()->getCacheInstance()->flush();
I just recently found out that the problem may not be specific to Lesti-FPC. While trying to fix the Lesti issue, I tried monitoring Redis. I know nothing about Redis or caching, but when I would try to refresh the FPC, I would see commands like:
"del" "zc:ti:403_FPC"
"srem" "zc:tags" "403_FPC"
But those tags never existed. Doing:
keys *FPC*
in Redis would give me
"zc:ti:109_FPC"
but nothing with 403. So this means that my FPC caches don't get invalidated like they're supposed to after product/category changes and reindexing. I got around this by manually flushing the cache after changes and running cron jobs to flush and prime the FPC every few hours.
But it made me suspicious. I tried refreshing the other caches from the admin, and I found that Magento would always try to delete and read the 403 keys (some of which existed and some of which did not), but never any 109 keys (of which there are many).
My guess is that the 403 keys are specific to the admin server, and the 109 keys are specific to the app servers. The admin server, maybe because it is on a different subdomain, is not touching the app servers' cached entries. But the app servers can find their own keys fine, as demonstrated by the fact that the FPC is working very well.
Does this make sense? Is there something I could do to fix this? Did I configure something incorrectly, or is this a Magento bug?
It turns out that the Zend cache prefix is the first three characters of an MD5 hash of the path to your etc folder.
My admin server has its document root at /var/www/html, and the full path /var/www/html/app/etc gives an ID of 403. The app servers running on Elastic Beanstalk have their document roots at /var/app/current, which is set automatically at deployment, so they hash to a different prefix.
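You can check the prefix for any path with a quick one-liner; if the explanation above is right, this should print 403 for the admin server's path (substitute your own etc path to see your prefix):

php -r 'echo substr(md5("/var/www/html/app/etc"), 0, 3), PHP_EOL;'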
It seems pretty dumb. Why not a hash of the database address and database name or something? That would make more sense.
Anyway, I hope this helps someone.
I built my site on my local machine with MAMP and migrated the finished site to my remote server. I used Akeeba Backup and Kickstart: I installed them before making the final backup and then set the site up on the remote server.
However, after doing so I am no longer able to log in to my admin. The passwords have not changed, and the only thing I get is a yellow pop-up message above the login box that says "Warning".
I have reset my passwords several times, but with no success. The site works without a hitch, so I don't think anything went wrong during the migration, but that wouldn't explain why I can no longer log in to my local instance either.
I am a little perplexed. I am not sure if there is a bug in Akeeba Kickstart 3.9.
Most probably your remote server has a newer PHP version, and the newer PHP version generates the MD5 hash in a different way.
I also had the same issue once. Please try the following steps in your DB's Joomla users table (a SQL sketch follows the steps):
Back up the password hash of your desired user.
Now go to this site and get the new MD5 hash of your password.
Insert the new MD5 hash into the DB.
Now try logging in again.
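For the insert step, assuming the default jos_ table prefix and an admin account literally named admin (both assumptions; adjust to your install), the update would look something like:

mysql -u root -p your_joomla_db -e "UPDATE jos_users SET password = MD5('NewSecret123') WHERE username = 'admin';"

Older Joomla versions accept a plain unsalted MD5 hash in that column, so this is enough to get you back in; change the password properly from the admin afterwards.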
I found my issue. Even though my local MAMP was using PHP 5.3 and my remote server PHP 5.6, the actual problem was that for some reason the Joomla User plugin was disabled during the migration, which is what prevented me from logging in.
I went into my DB and set the flag from '0' to '1', and that resolved the issue. I am not sure how the parameter got changed, but it was the culprit.
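For anyone hitting the same thing: on Joomla 2.5/3.x the flag lives in the extensions table, so the fix can be scripted roughly like this (the jos_ prefix and the plugin's folder/element values are assumptions based on a default install):

mysql -u root -p your_joomla_db -e "UPDATE jos_extensions SET enabled = 1 WHERE type = 'plugin' AND folder = 'user' AND element = 'joomla';"

On Joomla 1.5 the equivalent table is jos_plugins, with a published column instead.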
I was wondering if someone could help me. I've recently added the ability to upload images in my Rails application using CarrierWave, fog, and S3 for storage.
The application runs on Ruby 1.9.3-p194 and Rails 3.2.11. In development the application works fine and I can upload images all day long; in production, however, I get an intermittent "Excon::Errors::SocketError: Broken pipe (Errno::EPIPE)". I say intermittent because I've managed to successfully upload a couple of images in production, but I get this error more often than not.
I've spent some time looking into it, but at present I am at a loss as to what is causing it.
After doing some further digging, it appears it may have been because the region was set incorrectly in my config. I've run a test and all seems to be working correctly again.
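In case it helps anyone else: you can confirm which region a bucket actually lives in from the command line and then make sure the region in your fog credentials matches it (the bucket name below is a placeholder):

aws s3api get-bucket-location --bucket your-bucket-name

A region mismatch sends fog's requests to the wrong S3 endpoint, which can surface as exactly this kind of intermittent broken-pipe error.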