How to know if a database record was deleted - Laravel

I'm starting several PHP workers at the same time and each of them takes a job to do. These jobs are written to a database table, and when a worker takes one it deletes the record. My code:
$job = Job::first();
if (!empty($job) and $job->delete() == true) {
    // do something
}
But the problem is that some workers still take the same $job to perform at the same time! How can this happen?
UPDATE: I'm using a Postgres database.

Despite the comments above pointing toward a better solution, you should be able to solve it this way:
$job = Job::first();
if ($job && Job::where('id', $job->id)->delete()) {
    // do something else ...
}
Explanation: Job::where('id', $job->id)->delete() deletes all job records with the given id and returns the number of affected records. That will be either 0 or 1, which evaluates to false or true respectively in the if condition. So this should actually work, provided your database handles the concurrent delete properly.
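Since the update says the database is Postgres, here is a rough sketch of another option that claims a job atomically in a single statement. The jobs table name, the SKIP LOCKED clause (Postgres 9.5+), and reading the RETURNING rows through DB::select are my assumptions, not part of the answer above:
// Sketch only: atomically claim and delete one job on Postgres.
$claimed = DB::select(
    'DELETE FROM jobs
     WHERE id = (SELECT id FROM jobs ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED)
     RETURNING *'
);

if (!empty($claimed)) {
    $job = $claimed[0]; // a plain stdClass row, not an Eloquent model
    // do something
}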

Very simple: it is a race condition.
In order to avoid that you will need to implement some sort of database locking. You didn't indicate which database you were using, so I'm going to assume that you're using MySQL.
Your select statement needs to lock the rows you've selected. In MySQL you do this with FOR UPDATE:
DB::beginTransaction();
// Inside the transaction, run the equivalent of these queries with Eloquent or the query builder:
//   SELECT * FROM queue LIMIT 1 FOR UPDATE;
// Use the id if you got a row; if not, just commit immediately.
//   DELETE FROM queue WHERE id = $id;
DB::commit();
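A sketch of how this maps onto Eloquent's own locking helper (lockForUpdate() is a real query-builder method; the surrounding structure is my own, not the answer's code):
// Sketch: pessimistic locking with Eloquent inside a transaction.
DB::transaction(function () {
    $job = Job::lockForUpdate()->first(); // issues SELECT ... FOR UPDATE

    if ($job) {
        $job->delete();
        // do something with the claimed job
    }
});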

Related

How to make a command in Laravel that goes through thousands of rows and updates all the data with a Job, grouped 50 rows at a time

I want to make a command that can go through all the data in the database and update some data in an API for every row. I am afraid the command or cron will stop and not all the data will be updated, and when the command runs again it will start from the first id.
For example, if I have ids: [1,2,3,4,5,6,7,8,...100,101,..1000,1001,...]
I want to first update 50 rows (1-50), then (51-100), and so on. How can I make a Job that keeps track of which IDs should be updated? I hope someone can help me.
I tried something like this
$woocount = WoocommerceProduct::where('sync_status', 'IN_SYNC')->count();
if ($woocount > 100) {
    $max = intval($woocount / 100);
} else {
    $max = $woocount;
}
// dd($max);
$i = 1;
while ($i <= $woocount) {
    dump($max);
    $wooproducts = WoocommerceProduct::where([['sync_status', 'IN_SYNC'], ['id', '>=', $i], ['id', '<=', $max]])->get();
    dump($wooproducts->pluck('id')->toArray());
    CheckStockJob::dispatch($wooproducts);
    $i = $max + 1;
    $max = $max + 100;
}
You need to select the data and chunk it.
CheckStockJob should accept a collection of WoocommerceProduct models.
$wooproducts = WoocommerceProduct::where('sync_status', 'IN_SYNC')->get();
$wooproduct_chunks = $wooproducts->chunk(50);
$wooproduct_chunks->each(function ($products) {
    CheckStockJob::dispatch($products);
});
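If loading every matching product with get() is itself too memory-heavy, a variation (my own sketch, not part of the answer above) is to chunk at the query level instead:
// Sketch: chunk at the database level so only 50 rows are hydrated at a time.
WoocommerceProduct::where('sync_status', 'IN_SYNC')
    ->chunkById(50, function ($products) {
        CheckStockJob::dispatch($products);
    });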

How to insert multiple records without a loop in Laravel

I need to insert multiple records into the database. Currently I am inserting with a loop, which causes a timeout when the record set is big. Is there any way to avoid the loop?
$consignments = Consignment::select('id')->where('customer_id', $invoice->customer_id)->doesntHave('invoice_charges')->get();
foreach ($consignments as $consignment) {
    InvoiceCharge::create(['invoice_id' => $invoice->id, 'object_id' => $consignment->id, 'model' => 'Consignment']);
}
Consignment has a hasOne relation in the model:
public function invoice_charges()
{
    return $this->hasOne('App\Models\Admin\InvoiceCharge', 'object_id')->where('model', 'Consignment');
}
How about this:
$consignments = Consignment::select('id')->where('customer_id', $invoice->customer_id)->doesntHave('invoice_charges')->get();
$consignment_data = [];
foreach ($consignments as $consignment) {
    $consignment_data[] = ['invoice_id' => $invoice->id, 'object_id' => $consignment->id, 'model' => 'Consignment'];
}
InvoiceCharge::insert($consignment_data);
This way you insert with a single query instead of one query per row. Just check that the $consignment_data array looks right.
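If the array can get very large, a variation (my own sketch, not part of the answer) is to split it into batches so no single INSERT statement grows too big:
// Sketch: insert in batches of 500 rows.
foreach (array_chunk($consignment_data, 500) as $chunk) {
    InvoiceCharge::insert($chunk);
}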
If you want to save time and can afford to use more memory, you can use cursor().
cursor(): it uses PHP generators to iterate over your query results one by one. 1) It takes less time. 2) It uses more memory.
$consignments = Consignment::select('id')->where('customer_id',$invoice->customer_id)->doesntHave('invoice_charges')->cursor();
foreach($consignments as $consignment){
InvoiceCharge::create(['invoice_id'=>$invoice->id,'object_id'=>$consignment->id,'model'=>'Consignment']);
}
You can refer to the Laravel documentation for more details.

Laravel count raw query always returns zero

I am working with multiple databases in a single project. When I run a simple raw query to get another database's table count, it always returns zero instead of the actual count, even though I have more than 1 million records in the table.
I have run the raw query in the following formats, but the result is zero:
$dbconn = \DB::connection("archive_db");
$dbconn->table('activities_archived')->count()
$sql = "SELECT COUNT(*) as total FROM activities_archived";
$result = \DB::connection("archive_db")->select(\DB::raw($sql));
I have even set the database connection's strict option to false, but I am still facing the same issue.
Now I am totally stuck on why this is happening.
$someModel->setConnection('mysql2');
$something = $someModel->count();
return $something;
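Another option, sketched here with a hypothetical model name, is to pin the connection (and table) on an Eloquent model so every query automatically uses the archive database:
use Illuminate\Database\Eloquent\Model;

// Sketch: a model bound to the archive connection ("ActivityArchived" is a made-up name).
class ActivityArchived extends Model
{
    protected $connection = 'archive_db';
    protected $table = 'activities_archived';
}

// Then a count is simply:
$total = ActivityArchived::count();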

Best approach for seeding large data in Laravel

I have a file with over 30,000 records and another with 41,000. Is there a best practice for seeding these using Laravel 4's db:seed command? A way to make the inserts faster?
Thanks for the help.
Don't be afraid: a 40K-row table is kind of a small one. I have a 1 million row table and the seed ran smoothly; I just had to add this before doing it:
DB::disableQueryLog();
Before disabling it, Laravel exhausted my entire PHP memory limit, no matter how much I gave it.
I read data from .txt files using fgets(), building the array programmatically and executing:
DB::table($table)->insert($row);
one row at a time, which may be particularly slow.
My database server is PostgreSQL and the inserts took around 1.5 hours to complete, maybe because I was using a VM with little memory. I will run a benchmark on a better machine one of these days.
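A sketch of the same idea with batched inserts, which is usually much faster than inserting one row at a time; the file path, batch size, and column mapping below are placeholders, not from the answer above:
DB::disableQueryLog();

$rows = [];
if (($handle = fopen($pathToTxtFile, 'r')) !== false) {
    while (($line = fgets($handle)) !== false) {
        $rows[] = ['some_column' => trim($line)]; // map each line to your columns here

        if (count($rows) >= 500) {
            DB::table($table)->insert($rows); // one statement per 500 rows
            $rows = [];
        }
    }
    fclose($handle);
}

if (!empty($rows)) {
    DB::table($table)->insert($rows); // flush the remainder
}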
2018 Update
I ran into the same issue, and after 2 days of headaches I could finally write a script that seeds 42K entries in less than 30 seconds!
You ask how?
1st Method
This method assumes that you have a database with some entries in it (in my case 42k entries) and you want to import them into another database. Export your database as a CSV file with header names, put the file into the public folder of your project, and then you can parse the file in a seeder and insert all the entries into the new database one by one.
So your seeder will look something like this:
<?php

use Illuminate\Database\Seeder;
use Illuminate\Support\Facades\DB;

class {TableName}TableSeeder extends Seeder
{
    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        $row = 1;
        if (($handle = fopen(base_path("public/name_of_your_csv_import.csv"), "r")) !== false) {
            while (($data = fgetcsv($handle, 0, ",")) !== false) {
                if ($row === 1) {
                    $row++;
                    continue; // skip the header row
                }
                $row++;
                $dbData = [
                    'col1' => $data[0],
                    'col2' => $data[1],
                    'col3' => $data[2],
                    // ...and so on, for however many columns you have
                ];
                $colNames = array_keys($dbData);
                $placeholders = implode(',', array_fill(0, count($dbData), '?'));
                $createQuery = 'INSERT INTO locations ('.implode(',', $colNames).') VALUES ('.$placeholders.')';
                DB::statement($createQuery, array_values($dbData));
                $this->command->info($row);
            }
            fclose($handle);
        }
    }
}
Simple and Easy :)
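If it helps, once the class-name placeholder is filled in you can run just this seeder on its own with:
php artisan db:seed --class={TableName}TableSeeder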
2nd Method
In case you can modify the settings of your PHP installation and allocate a bigger memory limit to a particular script, this method will work as well.
Basically you need to focus on four major steps:
Allocate more memory to the script
Turn off the query logger
Divide your data into chunks of 1000
Iterate through the data and use insert() to insert one 1K chunk at a time
So if we combine all of the steps mentioned above, the seeder will look something like this:
<?php

use Illuminate\Database\Seeder;
use Illuminate\Support\Facades\DB;

class {TableName}TableSeeder extends Seeder
{
    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        ini_set('memory_limit', '512M'); // allocate more memory to the script
        DB::disableQueryLog();           // turn off the query logger

        // Create chunks: an array of chunks, each chunk holding up to 1000 row arrays.
        $data = [
            [
                ['col1' => 1, 'col2' => 1, 'col3' => 1, 'col4' => 1, 'col5' => 1],
                ['col1' => 1, 'col2' => 1, 'col3' => 1, 'col4' => 1, 'col5' => 1],
                // ...and so on, until 1000 entries
            ],
            [
                ['col1' => 1, 'col2' => 1, 'col3' => 1, 'col4' => 1, 'col5' => 1],
                ['col1' => 1, 'col2' => 1, 'col3' => 1, 'col4' => 1, 'col5' => 1],
                // ...and so on, until 1000 entries
            ],
            // ...and so on, for however many entries you have (I had 42000)
        ];

        // Iterate and insert one chunk per query.
        foreach ($data as $key => $d) {
            DB::table('locations')->insert($d);
            $this->command->info($key); // shows where your iterator is on the command line (best feeling in the world to see it rising, if you ask me :D)
        }
    }
}
and VOILA you are good to go :)
I hope it helps
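If your rows already sit in one flat array, a small variation (my own sketch, not part of the answer above) is to let array_chunk() build the 1000-row batches instead of nesting them by hand:
ini_set('memory_limit', '512M');
DB::disableQueryLog();

// $rows is assumed to be a flat array of row arrays, e.g. ['col1' => 1, 'col2' => 1, ...]
foreach (array_chunk($rows, 1000) as $key => $chunk) {
    DB::table('locations')->insert($chunk); // one query per 1000 rows
    $this->command->info($key);
}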
I was migrating from a different database and I had to use raw SQL (loaded from an external file) with bulk insert statements (I exported the structure via Navicat, which has the option to break up your insert statements every 250KiB). E.g.:
$sqlStatements = array(
    "INSERT INTO `users` (`name`, `email`)
     VALUES
     ('John Doe','john.doe@gmail.com'),.....
     ('Jane Doe','jane.doe@gmail.com')",
    "INSERT INTO `users` (`name`, `email`)
     VALUES
     ('John Doe2','john.doe2@gmail.com'),.....
     ('Jane Doe2','jane.doe2@gmail.com')"
);
I then looped through the insert statements and executed each one using DB::statement($sql).
I couldn't get insert() to work one row at a time. I'm sure there are better alternatives, but this at least worked and let me keep it within Laravel's migrations/seeding.
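For completeness, the loop the answer describes is just something like this (my own sketch):
foreach ($sqlStatements as $sql) {
    DB::statement($sql);
}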
I had the same problem today. Disabling the query log wasn't enough; it looks like an event also gets fired.
DB::disableQueryLog();
// DO INSERTS
// Reset events to free up memory.
DB::setEventDispatcher(new Illuminate\Events\Dispatcher());

Visual Studio C# LINQ bulk insert/update

I have recently developed a C# application using Linq.
I get a list of profiles from an external database that I need to process; some are new and some are already in the database and need to be updated.
What I do today is go over the profile list and check each profile: if it exists I update it, otherwise I insert it. This solution is working fine.
I am sure there is a way to use a bulk insert/update, something like UPDATE ON DUPLICATE, so I can save time, since the files I get are huge and bulk insert/update is known to have better performance. I would like to avoid the iteration I am using now.
insertall doesn't work for already stored rows; I need a combination of both update and insert.
Here is my code. Your help is highly appreciated.
foreach (Profile tmpProfile in profiles)
{
    try
    {
        var matchedProfile = (from c in db.ProfileEntities
                              where c.ProfileId == tmpProfile.Id
                              select c).SingleOrDefault();

        if (matchedProfile == null)
        {
            //Insert
            db.ProfileEntities.InsertOnSubmit(EntityMapper.ToEntity(tmpProfile));
        }
        else
        {
            //Update
            EntityMapper.ToEntity(ref matchedProfile, tmpProfile);
        }
    }
    catch (System.Data.SqlServerCe.SqlCeException sqlExec)
    {
    }
    catch (Exception e)
    {
    }
}
db.SubmitChanges();
One possible optimisation would be to build a list of all the profile IDs you got from the external application and then read all of the matching rows from the database at once, instead of doing one round trip per profile.
You can then update those, insert the ones that are left, and call SubmitChanges at the end; you will then have two round trips to the database instead of one per externally retrieved profile.
I don't know of any bulk update or insert features in LINQ to SQL.
