Laravel check if artisan command is running which triggers an SSE [closed] - laravel

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 2 years ago.
I'm using a private SDK made by a company, from which I have to receive messages over an SSE stream. Right now I run a function in my controller which sends a GET request to the server and keeps the connection alive. I trigger this from my console: I've made a command, php artisan fetch:messages, which calls my controller method and starts the connection.
Right now, if I close my console, the SSE stream also closes. How can I keep it alive when I'm away? It has to be active at all times.
I tried making a Laravel schedule that triggers the php artisan fetch:messages command every few minutes, but I cannot tell whether the previous command is still active.
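One common approach (a sketch, not from the question; the scheduler method names are Laravel's, and this assumes the command exits when the stream drops): schedule the command every minute with withoutOverlapping(), so a fresh instance only starts once the previous one has died. Alternatively, run the command under a process manager such as Supervisor, which restarts it automatically.

```php
<?php

// app/Console/Kernel.php -- minimal sketch.
// withoutOverlapping() acquires a mutex, so the scheduled run is skipped
// while a previous fetch:messages is still holding the SSE connection.
protected function schedule(Schedule $schedule)
{
    $schedule->command('fetch:messages')
             ->everyMinute()
             ->withoutOverlapping();
}
```

This way the scheduler itself answers the "is the previous command still active?" question: if it is, the new run is skipped.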

Related

Should a Go app panic if it doesn't find external dependencies/services [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
How should a Go app handle missing external dependencies?
When an app starts and cannot find the database it is supposed to persist its data to, knowing the app is useless in that state, should I panic?
I could otherwise log something endlessly, print to stderr, or notify in some other way, but I'm not sure when to choose each method.
An application that has no access to an external network service should not panic. This is to be expected, as networks tend to fail. I would wrap the error and pass it up the call stack.
Consider the following scenario. You have multiple application servers connected to two database servers, and you are upgrading the database servers one at a time. When one is turned off, half of your application servers panic and crash. You upgrade the second database server, and now every application server is gone. Instead, when the database is not available, just report an error, for instance by sending HTTP status 500. If you have a load balancer, it will pass the request to the working application servers. When the database server is back, the application servers reconnect and continue to work.
Another scenario: you are running an interactive application that processes a database to create a report. The connection is not available, so the application panics and crashes. From the user's perspective, that looks like a bug. I would expect a message saying that the connection cannot be established.
In the standard library it is accepted to panic when an internal resource is not available. See template.Must. This means something is wrong with the application itself.
Here's an example from the Go standard library's crypto package:
import "crypto"

func (h Hash) New() hash.Hash

New returns a new hash.Hash calculating the given hash function. New panics if the hash function is not linked into the binary.

Is there any specific reason for why some of the error codes are not recording in the alert log? [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 6 years ago.
I am working with an Oracle database and I am new to Oracle. Can anyone please tell me why some error codes are not getting recorded in the alert log?
Thanks in advance.
The alert log includes the following items:
- All internal errors (ORA-600), block corruption errors (ORA-1578), and deadlock errors (ORA-60) that occur
- Administrative operations, such as CREATE, ALTER, and DROP statements and STARTUP, SHUTDOWN, and ARCHIVELOG statements
- Messages and errors relating to the functions of shared server and dispatcher processes
- Errors occurring during the automatic refresh of a materialized view
- The values of all initialization parameters that had nondefault values at the time the database and instance start
So if your error doesn't fall into one of these categories, it won't be logged in the alert log.
For more info, see Oracle's documentation on the alert log.

Meteor: WebSocket is already in CLOSING or CLOSED state

I don't really know if this is just Meteor or Angular-Meteor, but I receive this error message a lot in the client console. Of course it's probably in response to bad code on my part, but I'm wondering when this should appear and what situation it describes.
Thanks and bye ...
I think it happens on auto-refresh of the local Meteor server (i.e. each time you modify a file).

Queues in Laravel 4.2 [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I am working on a project developed in Laravel 4.2.
I haven't been able to grasp the idea behind Laravel's queue service. I have read many documents about it, but things are still not clear.
Should I compare a queue to a cron job? When we put a cron job on a server, we specify the time at which it will run, but in the case of a queue I could not find where the run time is specified.
There are some files in the app/commands directory, and code is running on my server, but I cannot find out when it runs or how to stop these queues.
Please guide me with this problem.
A queue is a service where you add tasks to be handled later.
Generally you ask another service provider, such as iron.io, to call your application so that the task is processed asynchronously, repeating the call if it fails the first time. This allows you to respond to the application user quickly and leave the task to be processed in the background.
If you use the local sync driver, the task is executed immediately, during the same request.
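A minimal sketch of what that looks like in Laravel 4.2 (SendEmail and $message are illustrative names, not from the question):

```php
<?php

// Somewhere in a controller: push the task and respond immediately.
Queue::push('SendEmail', array('message' => $message));

// app/commands/SendEmail.php (hypothetical) -- a worker calls fire()
// when it picks the job off the queue.
class SendEmail {

    public function fire($job, $data)
    {
        // Process $data['message'] here (send mail, write rows, ...).

        // Remove the job from the queue once it succeeds.
        $job->delete();
    }
}
```

The key difference from cron: there is no scheduled run time. A worker started with php artisan queue:listen processes jobs as they arrive, and stopping that listener process is how you stop the queue.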

High CPU usage with php command line scripts, is php-fpm the solution? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 1 year ago.
Update (2013 Oct, 1st):
Today I finally found out what the issue was. After debugging for hours with xdebug and executing multiple scripts, I noticed that the body of the script was not the problem: when running test workers with a sleep of 10-20 seconds, the CPU was idling most of the time. I deduced that what was consuming most of the CPU was bootstrapping Symfony.
My scripts were executed very quickly and then killed to spawn a new script, and so on. I fixed it by adding a do {} while () loop that exits after a random number of seconds (to avoid all the workers restarting at the same time).
I've reduced the load from an average of 35-45% to an average of 0.5-1.5%, which is a HUGE improvement. To summarize: Symfony is bootstrapped once, and afterwards the script just waits until a random timeout to kill itself and launch a new instance of itself. This avoids scripts hanging, database connections timing out, etc.
If you have a better solution, do not hesitate to share. I'm so happy to go from 100% CPU usage (x4 servers, because of the auto-scaling) to less than 1% (and only one server) for the same amount of work; it's even faster now.
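The loop described above can be sketched roughly like this (illustrative only: processNextJob is a hypothetical stand-in for the real job handling, and the lifetime would be minutes in production):

```php
<?php

// Bootstrap the framework ONCE, outside the loop -- the expensive part.
// ... Symfony bootstrap would happen here ...

function processNextJob()
{
    // Stand-in for the real work: fetch a job, run DB queries, etc.
}

// Exit after a random lifetime so all the workers don't restart at once.
$deadline = time() + rand(300, 600);

do {
    processNextJob();
    sleep(1); // idle briefly instead of busy-waiting
} while (time() < $deadline);

// The process exits here; a process manager (e.g. Supervisor) or the
// launcher starts a fresh instance, which avoids hung scripts and
// stale database connections.
```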
Update (2013 Sep, 24th):
Just noticed that Symfony's console component uses the dev environment by default.
I specified prod on the command line, ./app/console --env=prod my:command:action, and the execution time was divided by 5, which is pretty good.
I also have the feeling that curl_exec is eating a lot of CPU, but I'm not sure.
I'm trying to debug the CPU usage with xdebug by reading the generated cachegrind files, but there is no record of CPU cycles used per function or class, only the time spent and the memory used.
If you want to use xdebug in a PHP command-line script, you can put #!/usr/bin/env php -d xdebug.profiler_enable=On at the top of the script (note that on Linux a shebang passes everything after the interpreter as a single argument, so it may be more reliable to run php -d xdebug.profiler_enable=On script.php directly).
If anyone has a tip for debugging this with xdebug, I'll be happy to hear it ;)
I'm asking this question without real hope.
I have a server that I use to run workers to process some background tasks. This server is an EC2 server (m1.small) inside an auto-scaling group with high CPU alert setup.
I have something like 20 workers (php script instance) waiting for jobs to be processed. To run the script I'm using the console component of Symfony 2.3 framework.
There is not much happening in the job, fetching data from URL, looping over the results and insert it row by row (~1000 rows per job) in MySQL (RDS server).
The thing is that with 1 or 2 workers running, the CPU is at 100% (not 100% all the time, but spiking every second or so), which causes the auto-scaling group to launch new instances.
I'd like to reduce the CPU usage, which is not justified at all. I was looking at php-fpm (FastCGI), but it looks like it's for web servers only; the PHP CLI wouldn't use it, right?
Any help would be appreciated,
Cheers
I'm running PHP 5.5.3 with the FPM SAPI and, as @datasage pointed out in his comment, FPM only affects the web-based side of things. If you run php -v on the CLI you'll notice:
PHP 5.5.3 (cli) (built: Sep 17 2013 19:13:27)
So FPM isn't really part of the CLI stuff.
I'm also in a situation similar to yours, except that I run my jobs via Zend Framework 2. I've found that jobs which loop over data can be resource-intensive at times, but also that this was caused by the way I had originally written the loop myself; it had nothing to do with PHP itself.
I'm not sure about your setup, but in one of my jobs which runs forever I've found this works out the best and my server load is almost null.
[root@router ~]$ w
 12:20:45 up  4:41,  1 user,  load average: 0.00, 0.01, 0.05
USER     TTY      LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    11:52    5.00s  0.02s  0.00s w
Here is just an "example":
do {
    sleep(2);
    // Do your processing here
} while (true);
Where it says // Do your processing here, I'm actually running several DB queries, processing files, and running server commands based on the job requirements.
So in short, I wouldn't blame or think that PHP-FPM is causing your problems but most likely I'd start looking at how you've developed your code to run and make the necessary changes.
I currently have four jobs which run forever, continuously looking at my DB for jobs to process, and the server load has never spiked. I've even tested this with 1,000 jobs pending.
