So I have this unique issue.
I have a Laravel app that works like Alexa, and it uses several APIs, including DMOZ, Google, and JSON, to fetch data about domains.
It has a feature where an admin can bulk-upload websites and start the cron, which then keeps updating the websites by itself.
After reaching approximately 1,000 websites, the app simply stopped.
I had to run chown -R user /path/to/directory again to get it working.
However, after doing this my cron stopped working.
I flushed the cronjob and cron manager tables in the database, deleted the cron.lock file, resubmitted the websites, and then started the cron.
Now the cron seems to be working, because rows have started to appear in the cron manager and cronjob tables and the cron output log confirms it, but the results are not appearing on the website.
Following are my Laravel logs:
http://pastebin.com/iYuFmD4p
Any ideas?
Related
I've built an internal app that is used only within the organization. The app works fine: it has a form which users fill in and save. For a few hours the form data is stored in the database, but after a few hours the database is seemingly purged and the app shows no data, as if it had been started for the first time. I checked the log but couldn't find anything. What and where can I check to find out what's going on? Could it be that, since I'm using free Heroku, it restarts after 24 hours?
I have a Laravel application deployed on AWS Elastic Beanstalk with a classic load balancer. Somehow the user sessions expire at irregular times: sometimes right after logging in, most times a few minutes after logging in, and on some occasions it takes hours to expire. On localhost, this doesn't happen.
I have configured the session duration in my Laravel application to 10 hours, and this works perfectly on localhost, but somehow it doesn't work on AWS ELB.
I suspect that AWS resets the app's sessions a number of times within a day. If that's the case, how do I overcome this? If that's not the case, then what might be causing it?
I'm posting the answer here in case anyone else runs into the same problem. What happens with AWS servers is that they redeploy your code a couple of times a day, and this clears all newly created and uploaded files in your project. That's why you have to use cloud storage if you want to persist files, and the same thing happens with sessions.
By default, Laravel saves sessions in files, and whenever AWS redeploys your code it wipes all current sessions because it deletes the session files. The solution is to store sessions anywhere but the filesystem, so I used my database to store sessions and cache. You can do that by:
Going to config/session.php and changing the driver to database.
Afterwards, run:
php artisan session:table
php artisan migrate
These will create the sessions table in the database for you, and that should fix the AWS problem, just like #arun-a said in short. You can check out the sessions docs for more info.
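For reference, the driver change itself is just one line; a minimal excerpt of config/session.php, assuming you also want to keep it overridable from .env (setting SESSION_DRIVER=database there has the same effect):
// config/session.php (excerpt): store sessions in the database instead of the default file driver
'driver' => env('SESSION_DRIVER', 'database'),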
If you are using a load balancer, you have to keep sessions centralized so they can be accessed across multiple servers. So use the database session driver instead of file and run the related migration. Refer here.
I am having trouble starting the processes/queues for a job server deployed to Google App Engine. In the Horizon dashboard, the instance names are visible, but no processes show up and jobs do not execute.
When running the code on localhost, the processes/queues do start and execute jobs. I confirmed that the horizon.php config is correct and matches my APP_ENV, yet still no processes start.
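For context, here is roughly what the relevant part of my config/horizon.php looks like (the supervisor name, queue names, and process counts are illustrative, not my exact values):
// config/horizon.php (excerpt): the environment keys must match APP_ENV
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'balance' => 'simple',
            'processes' => 3,
            'tries' => 3,
        ],
    ],
],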
Any guidance is appreciated!
Horizon opens and closes PHP processes with the proc_open and proc_close functions, which are on the list of permanently disabled functions in Google App Engine. After adding these to the whitelist_functions configuration under runtime_config in app.yaml, everything works great.
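A minimal sketch of what that app.yaml change can look like on the flexible PHP runtime (the runtime/env lines and document_root are placeholders for whatever your app already declares):
# app.yaml (excerpt): re-enable the process functions Horizon needs
runtime: php
env: flex

runtime_config:
  document_root: web
  whitelist_functions: proc_open,proc_close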
I am not able to back up Laravel automatically. I have tried:
https://github.com/spatie/laravel-backup but couldn't get the job done
https://github.com/spatie/laravel-backup/issues/617
Are there any other ways of getting automated Laravel backups?
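In case it helps, the usual way to automate spatie/laravel-backup is to schedule its backup:run command in app/Console/Kernel.php and make sure the system cron runs php artisan schedule:run every minute; a minimal sketch (the times are arbitrary):
// app/Console/Kernel.php (excerpt): run the backup package on a schedule
protected function schedule(Schedule $schedule)
{
    // prune old backups first, then take a fresh one
    $schedule->command('backup:clean')->daily()->at('01:00');
    $schedule->command('backup:run')->daily()->at('01:30');
}
The only system cron entry this needs is the standard Laravel scheduler one that calls php artisan schedule:run every minute.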
I want to create a user on Heroku and give that user specific permissions on a certain folder.
I've logged into the Heroku bash but I'm not able to create a user; it gives me a permission denied error. sudo is also not working, and I can't install anything.
The organisation admin user is also not able to create a user.
Heroku will not allow you to do that.
Running heroku run bash is not the same as connecting to an SSH server.
When you build a new version of your application, Heroku creates a new container (much like Docker; it's LXC). Any instance of your application runs that container.
When you run a bash instance, a new instance of that container is created. You are not running on the same server that your app serves requests from.
That means the only time disk changes can be performed is at build time. So even if you could create users in a bash instance, they wouldn't be persisted across instances.
Heroku will not let you create new Linux users at build time anyway.
The only way to access your app's code in a bash session is to run a one-off dyno. If you need to script that, you can use the platform API to boot a new dyno.
As for adding access, you can use the access:add command (also available as an API endpoint); see the sketch below.
All users will be able to access all of your code though. You cannot restrict per folder.
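For example, both of those look roughly like this from the CLI (the app name and email are hypothetical):
# one-off dyno with an interactive shell on a fresh copy of the app's code
heroku run bash --app my-app
# grant another account access to the app (team/organisation apps)
heroku access:add user@example.com --app my-app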