I'm using the database queue driver in my system.
How can I delete a queued job that is stored in my jobs table?
Thanks
Use the Illuminate\Queue\InteractsWithQueue trait, which gives you access to the delete() method.
More information is in the API reference and in the docs under "Manually accessing the queue".
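As a sketch (the job class name and the abort condition here are hypothetical, not from your code), calling $this->delete() inside a job removes its row from the jobs table:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class ProcessReport implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    public function handle()
    {
        // Hypothetical condition: the work is no longer needed.
        if ($this->shouldAbort()) {
            // Deletes this job's record from the jobs table so it
            // will not be attempted again.
            $this->delete();
            return;
        }

        // ... actual processing ...
    }

    private function shouldAbort(): bool
    {
        return false; // placeholder for your own check
    }
}
```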
I have multiple databases in my project: each company gets its own database. I am developing automation workflows in my current project, and I planned to implement a queue-jobs concept to achieve this.
We also maintain one more database (commonDatabase) that contains the list of all databases and the companies using them. I am a little confused about how to approach this kind of scenario: should I maintain the jobs table in my commonDatabase, or create a jobs table inside each database separately?
Note: every time a user tries to log in, he has to give a company name (we send it with all requests in the headers), which indicates the database name.
My doubts are:
1. I created a jobs table in each database, but it's not inserting records into the particular database; instead it's inserting into the commonDatabase jobs table. Why?
2. What is the best way to achieve this kind of scenario?
3. If a user logs out, will the queue still run in the background?
What I understand from your question is that you want to convert your project to multi-tenant, multi-database, and every company will generate a request to build a tenant for them. The answers to your questions follow:
I created a jobs table in each database, but it's not inserting records into the particular database; instead it's inserting into the commonDatabase jobs table?
I suggest you watch this YouTube playlist.
If the job is related to a company process (e.g., you want to process a company's invoice email), then you should dispatch the job to that company's database. If the job is related to the commonDatabase (e.g., you want to configure a company database and then run migrations and seeders into it), then it should be dispatched to the commonDatabase.
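One way to make jobs land in the right jobs table (a sketch — the connection names here are assumptions, not your configuration) is to define a queue connection per target database in config/queue.php and dispatch onto it explicitly:

```php
// config/queue.php (illustrative) — one database queue connection per target DB
'connections' => [
    'common' => [
        'driver'      => 'database',
        'connection'  => 'commonDatabase', // DB connection holding the shared jobs table
        'table'       => 'jobs',
        'queue'       => 'default',
        'retry_after' => 90,
    ],
    'tenant' => [
        'driver'      => 'database',
        'connection'  => 'tenant', // DB connection you switch per request from the header
        'table'       => 'jobs',
        'queue'       => 'default',
        'retry_after' => 90,
    ],
],
```

A job can then be dispatched with, e.g., `ProcessInvoiceEmail::dispatch($invoice)->onConnection('tenant');`, and a worker started per connection with `php artisan queue:work tenant`. Without an explicit connection, dispatched jobs go to the default queue connection — which is why your records currently end up in the commonDatabase jobs table.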
if any user logged out the queue will run background or not?
Yes, the queue will still run in the background, because the queue worker runs on the server and has no dependence on the login session or any other authentication mechanism. You should read the following articles/threads:
Official Laravel docs on queues
How to set up a Laravel queue worker
I have started using HBase recently and just wanted to check whether anyone here has come across the scenario I am facing.
I have a web service deployed on a couple of servers that accesses HBase to update a field. The update is conditional: I have to read the field from HBase and, if its value is "A", update it to "B"; if a concurrent update has already changed it (to "C"), I must not update. But with different servers and concurrent requests, it is possible that both read the existing value as "A", and one updates it to "B" while the other updates it to "C".
Since requests arrive concurrently from different servers, thread-level locking is of no use (and there are also multiple concurrent requests from the same server).
Is there a way to lock at the HBase level, so that I can acquire the lock at the service layer, lock the row, and then update it?
There is a RowLock class in the HBase API, but we are using a higher version of HBase (1.1.2.3), where that class has been removed.
I'd appreciate it if someone could point me in a direction!
Thanks in advance.
I'm building an application in Laravel that has several connections to different databases, each of which reads a service audit table. The application visualizes logs from different applications.
To improve read speed, would it be possible to download all the data from the different databases into a local Redis store every X minutes and run the queries directly against it?
You can do this via scheduled tasks:
https://laravel.com/docs/5.7/scheduling#scheduling-artisan-commands
This will allow you to run an Artisan command:
https://laravel.com/docs/5.7/artisan
In this command you can fetch the data from your databases and save it to Redis.
To access multiple databases, follow the details here:
https://laravel.com/docs/5.7/database#read-and-write-connections
And to set up Redis, here are the docs:
https://laravel.com/docs/5.7/redis
All you will need to do is track what you have already transferred, fetch what you have not, and save that data to Redis.
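Putting those pieces together, a scheduled command might look like this sketch (the command name, connection names, table name, and Redis keys are all assumptions for illustration):

```php
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Redis;

class SyncAuditLogs extends Command
{
    protected $signature = 'logs:sync';
    protected $description = 'Copy new audit rows from each database into Redis';

    public function handle()
    {
        // Illustrative list of configured DB connections, one per application.
        foreach (['app_one', 'app_two'] as $connection) {
            // Track the last transferred id per connection so rows are
            // not copied twice.
            $lastId = (int) Redis::get("audit:{$connection}:last_id");

            $rows = DB::connection($connection)
                ->table('service_audit')
                ->where('id', '>', $lastId)
                ->orderBy('id')
                ->get();

            foreach ($rows as $row) {
                Redis::rpush("audit:{$connection}", json_encode($row));
                $lastId = $row->id;
            }

            Redis::set("audit:{$connection}:last_id", $lastId);
        }
    }
}
```

It would then be registered in the console kernel with something like `$schedule->command('logs:sync')->everyFiveMinutes();` so it runs on your chosen interval.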
I am using Laravel 5.5. I would like to store session data in a table. When the session is over, the data should be deleted. Can anyone help me accomplish this task?
I would suggest using Redis for storing and destroying session data in a Laravel application; one of the most common use cases for Redis is as a session cache.
If you are determined to store your session data in a database, check out the documentation, which explains the options for handling session data.
You need to change the session configuration file, stored in config/session.php, to the database option and create the needed table.
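Concretely, that means setting the driver in config/session.php (a sketch of the documented setting, nothing project-specific):

```php
// config/session.php
// Switch the session driver to the database backend.
'driver' => env('SESSION_DRIVER', 'database'),

// Lifetime (in minutes) after which expired sessions may be swept.
'lifetime' => env('SESSION_LIFETIME', 120),
```

Then run `php artisan session:table` to generate the migration for the sessions table, followed by `php artisan migrate` to create it.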
Current Setup:
SQL Server OLTP database
AWS Redshift OLAP database, updated from the OLTP database via SSIS every 20 minutes
Our customers only have access to the OLAP Db
Requirement:
One customer requires some additional tables to be created and populated to a schedule which can be done by aggregating the data already in AWS Redshift.
Challenge:
This is only for one customer, so I cannot leverage the core process for populating AWS; the process must be independent and is to be handed over to the customer, who does not use SSIS and does not wish to start. I was considering using Data Pipeline, but it is not yet available in the region where the customer resides.
Question:
What is my alternative? I am aware of numerous partners who offer ETL-like solutions, but this seems over the top; ultimately all I want to do is execute a series of SQL statements on a schedule, with some form of error handling/alerting. The preference of both the customer and management is not to use a bespoke app for this, hence the intended use of Data Pipeline.
For exporting data from AWS Redshift to another data source using Data Pipeline, you can follow a template similar to https://github.com/awslabs/data-pipeline-samples/tree/master/samples/RedshiftToRDS, which transfers data from Redshift to RDS. Instead of using RDSDatabase as the sink, you could use a JdbcDatabase (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-jdbcdatabase.html). The template https://github.com/awslabs/data-pipeline-samples/blob/master/samples/oracle-backup/definition.json provides more details on how to use the JdbcDatabase.
There are many such templates available in https://github.com/awslabs/data-pipeline-samples/tree/master/samples to use as a reference.
I do exactly the same thing as you, but I use the Lambda service to perform my ETL. One drawback of Lambda is that it can run for a maximum of 5 minutes (initially it was 1 minute).
So for ETL jobs that take longer than 5 minutes, I am planning to set up a PHP server in AWS from which I can run my SQL queries, scheduled at any time with the help of cron.
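That cron-driven approach can be sketched as a plain PHP script (the DSN, credentials, and SQL statements below are placeholders). Redshift accepts the PostgreSQL wire protocol, so PDO's pgsql driver can connect to it:

```php
<?php
// etl.php — run from cron, e.g.: 0 * * * * php /opt/etl/etl.php
// Placeholder cluster endpoint and credentials.
$dsn = 'pgsql:host=example-cluster.redshift.amazonaws.com;port=5439;dbname=analytics';

try {
    $pdo = new PDO($dsn, 'etl_user', getenv('REDSHIFT_PASSWORD'), [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    // The aggregation statements to run in order (illustrative only).
    $statements = [
        'TRUNCATE customer_schema.daily_totals',
        'INSERT INTO customer_schema.daily_totals
             SELECT order_date, SUM(amount) FROM sales GROUP BY order_date',
    ];

    foreach ($statements as $sql) {
        $pdo->exec($sql);
    }
} catch (PDOException $e) {
    // Minimal error handling: log the failure so it can trigger an alert.
    error_log('ETL failed: ' . $e->getMessage());
    exit(1);
}
```

Anything that surfaces the logged failure (mail, CloudWatch agent tailing the log, etc.) would cover the alerting requirement.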