I am new to Laravel and still learning its capabilities, and I need some assistance with the website I am working on... So, I have a website that is having timeout problems when connecting to a third-party app during certain periods... It could be an issue with the third party having maintenance/downtime, but is there any way I can check/monitor the Laravel site itself for scheduled tasks running during those affected times?
Or could it also be monitored using the Apache logs?
I tried searching online, but it appears I would need to install some cron monitoring tools.
Laravel has Telescope, a very good debugging tool. With it you can see scheduled tasks, requests, events, etc.
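If you also want a record without installing anything, you can log scheduled-task activity from the scheduler itself and compare the timestamps with your Apache logs. A minimal sketch, assuming a hypothetical sync:third-party command and the default storage/logs path, in app/Console/Kernel.php:

```php
<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        // Hypothetical command; log every run so timeouts can be
        // correlated with scheduled-task activity.
        $schedule->command('sync:third-party')
                 ->hourly()
                 ->appendOutputTo(storage_path('logs/scheduler.log'))
                 ->before(function () { logger()->info('sync:third-party starting'); })
                 ->after(function () { logger()->info('sync:third-party finished'); });
    }
}
```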
So... I'm pretty new to hosting websites, and worse yet with hosting on cloud services. I'm having a pretty bad time trying to host a Laravel 5.7 application on AWS and I really need some help/direction.
My situation is like this:
I have a normal Laravel 5.7 app that uses a MySQL database and the Laravel queue (database driver).
I need a process that constantly listens to/monitors the job queue and executes the jobs.
The thing is, I just learned that you can't use normal hosting to do this (listen to the job queue) and that I need a VPS service.
In my searches I saw that AWS has a one-year free tier and I thought it was worth a look. The thing is, I started with EC2 and got as far as installing Apache, MySQL, PHP, Git, etc. I cloned my project into /var/www/html and installed the Composer dependencies... but I don't quite understand what to do to make the app run, and when looking for tutorials everything is always different and nothing ever works. I don't know what else to do; this is my first contact with cloud services.
That said, my question is: what can I do to host my Laravel 5.7 app on Amazon, and is it really the best solution for my problem?
I am currently doing what you're trying to do. The configuration is outlined here. You will need to install supervisord. You didn't leave enough information about what type of instance you're running, so I can't give you more complete instructions, but if you follow these steps your queue should run as expected.
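For reference, a Supervisor program entry for a queue worker typically looks something like the sketch below (the path, user, and database driver are assumptions based on your description); it would go in /etc/supervisor/conf.d/ and be loaded with supervisorctl reread followed by supervisorctl update:

```ini
; /etc/supervisor/conf.d/laravel-worker.conf
; Keeps a queue worker running and restarts it if it dies.
[program:laravel-worker]
command=php /var/www/html/artisan queue:work database --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/var/www/html/storage/logs/worker.log
```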
I have made a notification system in my Laravel project.
In development it works perfectly, but on shared hosting it doesn't, because I can't run queue:listen to process the queued notifications.
Right now I use a free 000webhost account to test my application, and I could not find a way to run "php artisan queue:listen". I tried to create a cron job, but it seems I don't know how to implement it, because the way to create a cron job on 000webhost is slightly different from other hosts; same thing with Laravel task scheduling.
I use pusher-js and Laravel Echo to broadcast the notifications.
I have read a lot of topics but could not find a clear solution.
Thanks.
You need terminal access, or some means of pushing a Supervisor configuration (or similar) to the server and restarting the service. I don't think you're going to have any luck on the free account, though (see the cron-based workaround sketched after the quote below).
4) Users can access to server terminal (linux environment) via SSH, which is not allowed for free accounts. If you upgrade to a premium account, it is possible.
https://www.000webhost.com/forum/t/basic-question-mysql-with-php/28028/3
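If you do end up on a host that at least offers real cron, a common workaround is to drain the queue from cron instead of running a long-lived listener. A minimal sketch, assuming Laravel 5.7+ (for --stop-when-empty; on older versions, queue:work --once is the closest equivalent) and that the app lives in ~/app:

```sh
# Run every minute: process whatever is queued, then exit.
* * * * * cd ~/app && php artisan queue:work --stop-when-empty >> storage/logs/queue.log 2>&1
```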
For my PHP web app I am using the PHP buildpack. Now I would like to schedule a task that should be triggered every month. Normally I would use cron jobs for that.
How can I achieve that within the Swisscom Application Cloud?
Swisscom App Cloud is based on the open-source Cloud Foundry.
Upstream Cloud Foundry doesn't have a feature equivalent to cron jobs (a task scheduler). Stay tuned; I guess this feature will be implemented soon, because lots of people are migrating from Heroku to CF, and Heroku offers a cron job feature. Subscribe to the Swisscom App Cloud newsletter to read the announcements.
There are workarounds for scheduling tasks; see Scheduling tasks on Cloud Foundry on blog.pivotal.io for a Ruby/Rake-based example. Sorry, for PHP I didn't find example code. There is no elegant solution! You need to implement some kind of workaround yourself. It would be great if you published your code to GitHub.
If you only need cron jobs in the data store, MariaDB, for example, offers Events (a sketch follows the quote below).
Events are named database objects containing SQL statements that are to be executed at a later stage, either once off, or at regular intervals. They function very similarly to the Windows Task Scheduler or Unix cron jobs.
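As a rough sketch of a monthly job (the table name and retention period are made up for illustration, and the event scheduler must be enabled first):

```sql
-- One-time setup: SET GLOBAL event_scheduler = ON;
CREATE EVENT monthly_cleanup
ON SCHEDULE EVERY 1 MONTH
STARTS '2019-02-01 00:00:00'
DO
  DELETE FROM event_logs WHERE created_at < NOW() - INTERVAL 6 MONTH;
```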
We had a similar issue. As written by @Fyodor, there is no native solution in Cloud Foundry. We did some research and found vendors like https://www.iron.io/.
Finally, we ended up with a very simple solution.
We expose all our background jobs via an HTTPS interface.
As we use Jenkins for CI/CD anyway, and it has lots of scheduling capabilities, we use our existing Jenkins to trigger these jobs via simple cURL calls to the HTTP endpoints.
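A minimal sketch of the trigger (the endpoint path and token are placeholders; in Jenkins, a cron-style build trigger such as `H 3 1 * *` runs it monthly):

```sh
# Kick off the background job over HTTPS; the shared token guards the endpoint.
curl --fail -X POST \
     -H "Authorization: Bearer ${JOB_TOKEN}" \
     https://myapp.example.com/internal/jobs/monthly-report
```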
I'm developing a node.js application that basically stores user event logs in a database and shows insights about user actions.
To achieve this, the event logs must be analyzed by a MapReduce job that runs automatically once a day (every night).
I've found lots of tutorials about MapReduce on the Google Cloud website, but I'm totally lost: there are several technologies, I can't find a way to do it without using the command line, and there is no information about scheduling (I want the whole analysis process to be entirely automated).
Could you please advise me on which Google technologies I should use, or where I can find a good tutorial?
Thank you
You want to be looking at:
Dataproc (run Hadoop/Spark jobs out of the box)
Dataflow (develop 'pipelines' using the Dataflow/Beam programming model)
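For instance, a nightly Dataproc run can be automated from any machine with the Cloud SDK installed; a sketch with placeholder cluster, bucket, and class names (it does use the command line, but cron makes it fully hands-off):

```sh
# Invoked by cron every night, e.g.: 0 2 * * * /opt/jobs/run_log_analysis.sh
gcloud dataproc jobs submit spark \
    --cluster=log-analysis-cluster \
    --region=us-central1 \
    --class=com.example.LogAnalysis \
    --jars=gs://my-bucket/log-analysis.jar
```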
Sometimes when I access my Windows Azure website, the initial response time is very slow. After the first page load the website is fast. Some background: the website is not visited that often at the moment. Further, I am using a keepalivecontroller to keep the website running, and the website is running in Shared mode. I am wondering: are websites that are not that active removed from memory in Windows Azure? Or is it just that background tasks at the operational level of Windows Azure are interfering sometimes? It is not transparent to me what is happening, so is there an SLA or something for Windows Azure Websites?
There is now a new feature available for Windows Azure Websites in 'Reserved' mode that will keep your website warm. You can now turn on "Always On" under the "Configuration" tab of your Azure Website. As explained in this blog post:
When the new “Always On” feature is enabled on a site, “Windows Azure will automatically ping your website regularly to ensure that the website is always active and in a warm/running state,” Guthrie writes. “This is useful to ensure that a site is always responsive (and that the app domain or worker process has not paged out due to lack of external HTTP requests).”
The easiest way to keep a website warm is to call it regularly using the Scheduler feature in Windows Azure Mobile Services.
You simply write a script in the Scheduler that pings your website every x minutes.
Here's a post covering how to do that: http://fabriccontroller.net/blog/posts/job-scheduling-in-windows-azure/
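A scheduler script along these lines should do it (a sketch: the job name and URL are placeholders; Mobile Services scheduler scripts are Node.js and define a function named after the job):

```js
// Runs every x minutes as a Mobile Services scheduled job named "pingSite".
function pingSite() {
    var request = require('request');
    request('http://mywebsite.azurewebsites.net/', function (error, response, body) {
        if (error || response.statusCode !== 200) {
            console.error('Ping failed: ' + (error || response.statusCode));
        }
    });
}
```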
The Windows Azure Web Sites are still in preview, so there is currently no SLA with that service.
The Web Sites do idle out when in Free or Shared mode, which is likely what you are seeing. When the site idles out it actually is removed from memory; indeed, the IIS process host running the site is shut down. This is how they can get the density of hosting 100 sites on the same VM.
You can find a lot of info on the Channel9 site about why this is the case, or, as a shameless plug, here is an article that talks about how the process is handled.
Now, you mentioned that you were using a keepalivecontroller, but what exactly do you mean by that? I use pingdom.com to constantly request data from one of my websites, and that seems to do pretty well. It is still possible that a request doesn't come in before the idle timeout is hit, which then cycles the site. It is also possible that, even if you always have the site running, the VM the site sits on needs its underlying OS updated, in which case Azure would move the site process to another VM, which could also cause the slow start-up on the next request.
I'd start logging your application start-ups and then look through your logs to see how often that is happening.
If you only need to warm it up once (vs. keeping it warm) and are mostly trying to prevent your customers from experiencing page cold starts, I believe the correct tool is IIS Application Initialization. You can configure it with a list of URLs to hit before it deems the app ready for action.
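For reference, the configuration is a short web.config section along these lines (the pages listed are placeholders):

```xml
<!-- web.config: warm these URLs before the app is considered ready. -->
<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <add initializationPage="/" />
    <add initializationPage="/products" />
  </applicationInitialization>
</system.webServer>
```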
My site is suffering from page cold starts, and that is severely magnified in Azure Websites (even on an S3), but it is absolutely speedy after it's served that first time, thanks to several layers of caching (our inefficient use of Umbraco's dynamic node query language creates a lot of database churn, which we're cleaning up opportunistically).
From what I've read and my own web.config attempts this is still not available in Azure Websites. I've asked Microsoft for it here: MS IDEA: Application Initialization to warm up specific pages when app pool starts. Please consider voting for it.
For each service/site you need to go to "Configure", then switch "Always On" to ON. Also make sure you click Save; it took about two minutes before my website noticed the change.
Why this is not the default is kind of mind-boggling, because my setup on HostGator was running much faster than Azure. I guess Microsoft figures that if nobody is accessing your site, it's okay if it has a long load time.