Is there any way to save a whole batch of records using Eloquent?
For example, I have 1,000 items to save; the traditional way is to loop over them with foreach and save each one.
What I foresee is that if the connection drops in the middle of the saving process, the remaining items will not be saved. Is there any way to address this?
A stored procedure seems feasible, but the bandwidth would kill us.
What you are looking for is a transaction; please take a look at https://laravel.com/docs/5.4/database#database-transactions.
You may use the transaction method on the DB facade to run a set of operations within a database transaction. If an exception is thrown within the transaction Closure, the transaction will automatically be rolled back. If the Closure executes successfully, the transaction will automatically be committed.
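For the batch-save scenario in the question, a minimal sketch might look like this (Item and $items are placeholders for your own model and data):
$items = [/* ... 1,000 arrays of attributes ... */];

DB::transaction(function () use ($items) {
    foreach ($items as $attributes) {
        // If any save throws, everything inside the closure is rolled back
        Item::create($attributes);
    }
});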
Another way is to use the insert() method to create all of the rows at once:
DB::table('users')->insert([
    ['email' => 'taylor@example.com', 'votes' => 0],
    ['email' => 'dayle@example.com', 'votes' => 0],
]);
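For a large batch such as the 1,000 items in the question, you can also split the payload into chunks so a single INSERT statement doesn't grow too large; a minimal sketch, assuming $rows is an array of row arrays like the ones above and using an arbitrary chunk size of 500:
DB::transaction(function () use ($rows) {
    // Each chunk becomes one multi-row INSERT; all of them commit or roll back together
    foreach (array_chunk($rows, 500) as $chunk) {
        DB::table('users')->insert($chunk);
    }
});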
I am currently making a turn-based strategy game with Laravel (MySQL with the InnoDB engine) and want to make sure that I don't have bugs due to race conditions, duplicate requests, bad actors, etc.
Because these kinds of bugs are hard to test, I wanted to get some clarification.
Many actions in the game can only occur once per turn, like buying a new unit. Here is a simplified bit of code for purchasing a unit.
$player = Player::find($player_id);

if ($player->gold >= $unit_price && $player->has_purchased == false) {
    $player->has_purchased = true;
    $player->gold -= $unit_price;
    $player->save();

    $unit = new Unit();
    $unit->player_id = $player->id;
    $unit->save();
}
So my concern would be if two threads both made it past the if statement and then executed the block of code at the same time.
Is this a valid concern?
And would the solution be to wrap everything in a database transaction like https://betterprogramming.pub/using-database-transactions-in-laravel-8b62cd2f06a5 ?
This would mean that a good portion of my code ends up wrapped in database transactions, because I have a lot of instances that are variations of the above code for different actions.
Also, there are situations where multiple users can update the same value in the database, and I want to avoid two users incrementing the value at the same time with it only being incremented once.
Since you are using Laravel to presumably develop a web-based game, you can expect multiple concurrent connections to occur. A transaction is just one part of the equation. Transactions ensure operations are performed atomically, in your case it ensures that both the player and unit save are successful or both fail together, so you won't have the situation where the money is deducted but the unit is not granted.
However, there is another facet to this: if there is a real possibility that two separate requests for the same player arrive concurrently, you may also encounter a race condition. A transaction is not a lock, so two transactions can happen at the same time. The implication (in your case) is that two checks on the same player instance can both see enough gold, both succeed, and both deduct the same gold, yet two distinct units are granted at the end (i.e. item duplication). To avoid this you'd use a lock to prevent other threads from obtaining the same player row/model, so your full code would be:
DB::transaction(function () use ($player_id, $unit_price) {
    $player = Player::where('id', $player_id)->lockForUpdate()->first();

    if ($player->gold >= $unit_price && $player->has_purchased == false) {
        $player->has_purchased = true;
        $player->gold -= $unit_price;
        $player->save();

        $unit = new Unit();
        $unit->player_id = $player->id;
        $unit->save();
    }
});
This will ensure any other threads trying to retrieve the same player will need to wait until the lock is released (which happens when the first transaction commits or rolls back).
There are more nuances to deal with here as well, like a player sending a duplicate request by double-clicking, for example, and that can get a bit more complex; a rough sketch of one way to guard against that follows.
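One possible guard for the duplicate-request case is Laravel's atomic cache locks (this requires a cache driver that supports them, such as redis or database); the key name, the 5-second timeout, and the purchaseUnit() helper are all illustrative, not part of the answer above:
use Illuminate\Support\Facades\Cache;

$lock = Cache::lock("purchase-player-{$player_id}", 5);

if ($lock->get()) {
    try {
        // Run the DB::transaction(...) purchase block shown above
        purchaseUnit($player_id, $unit_price); // hypothetical helper
    } finally {
        $lock->release();
    }
} else {
    // Another purchase request for this player is already being processed; ignore the duplicate
}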
For your purchase system, it's advisable to use DB::transaction, since it protects you from inconsistent records. Check out the Laravel docs for more information: https://laravel.com/docs/9.x/database#database-transactions. As for reactive data you need to keep track of, simply bind a variable to that data in your front end, then use the variable to update your DB records.
Use this in cases where you need to exit if any exception or error occurs: if an exception is thrown, the data will not be saved and the transaction is rolled back. I recommend using transactions wherever you can. The basic format is:
DB::beginTransaction();

try {
    // database actions like create, update, etc.
    DB::commit(); // finally commit to database
} catch (\Exception $e) {
    DB::rollback(); // roll back if any error occurs
    // something went wrong
}
See the Laravel docs on database transactions for more details.
I am working with Meilisearch in my Laravel application, and am trying to figure out the best way I can mock my Meilisearch indexes.
The tests themselves are currently not that advanced; for example, my test to simply create an index looks like this:
public function test_create_index()
{
    $model = Package::factory()->create([
        'language' => 'DA'
    ]);

    sleep(1);

    $this->assertIsArray($this->meiliClient->index('Package_DA')->getDocument($model->id));

    $this->meiliClient->index('Package_DA')->deleteDocument($model->id);
}
I am using Laravel Observer to fire a method when a new model is created, which calls my Meilisearch Service to create a new index and insert a new record.
The downside here, though, is that in order for Meilisearch to have time to register the new record in the index, I must sleep the script for 1 second, or else it won't have time to update. This is something I have to do every time I call the Meilisearch client. Combine this with 8 calls in total, and my test file will take over 8 seconds to run.
The next issue, as shown in the last code line, is that I have to manually delete the created record from the index once the test has run. What I would like to do is fake the Meilisearch calls somehow so they don't actually create the records, but still test whether the creation succeeded, similar to when you fake Events or Job dispatches.
I am considering making a separate trait to handle this, but I am not sure if it is even possible to achieve the faking of Meilisearch Indexes in the way that I want it.
I'm open to ideas or suggestions on how this could be done, or whether it is even possible.
Thanks
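A minimal sketch of one way the faking could look, assuming the observer resolves a hypothetical App\Services\MeilisearchService binding from the container (the class and its createIndex/addDocument method names are illustrative): Laravel's $this->mock() test helper swaps the container binding for a Mockery mock, so no real Meilisearch server is hit and nothing needs cleaning up afterwards.
use App\Services\MeilisearchService; // hypothetical service the observer depends on

public function test_create_index_without_real_meilisearch()
{
    // Replace the container binding so the observer talks to a mock instead of Meilisearch
    $this->mock(MeilisearchService::class, function ($mock) {
        $mock->shouldReceive('createIndex')->once()->with('Package_DA');
        $mock->shouldReceive('addDocument')->once();
    });

    Package::factory()->create(['language' => 'DA']);
}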
I am using a transaction accompanied by updateOrCreate(). As you can infer, things could go awry if multiple connections access and update the same id, a race condition in writing into the DB.
I am also locking that specific row that is being dealt with in my model.
My abbreviated code looks as follows in the file writing to the DB:
DB::beginTransaction();

DB::table('myTable')->where('my_id', $cells[0])->lockForUpdate()->get();

$outcome = Outcome::updateOrCreate(
    [
        'my_id' => $cells[0],
    ],
    [
        'etc' => $cells[1],
        'etc-etc' => $cells[2],
    ]
);

DB::commit();
Please let me know if this is true: as far as I understand, I am getting a lock for that specific row, so transaction n is queued, waiting for transaction n - 1 to finish, sequentially.
The test is supposed to simulate multiple requests that write to the DB. Please let me know if more clarity is needed to answer this.
I would like to know how to test this. Do I run multiples of the same TestCase? If so, how? Or is there another way to achieve this kind of concurrency test?
So I have a Laravel 5.2 project where Redis is used as the cache driver.
There is a controller with a method that connects to Redis, increments a value, and adds the value to a set each time it is called, like this:
$redis = Redis::connection();
$redis->incr($value);
$redis->sadd("set", $value);
But the problem is that sometimes there are many connections and many calls to this method at the same time, and data is lost: if two callers call this method while $value is 2, after the incr it becomes 3 but should be 4 (after two incrs, basically).
I have thought about using Redis transactions, but I can't figure out when I should call the MULTI command to start queueing and when to call EXEC.
I also had the idea of collecting all the incrs and sadds as strings in another set and then applying them with a cron job, but that would cost too much RAM.
So, any suggestions, how can this data loss be avoided?
Laravel uses Predis as a Redis driver.
To execute a transaction with Predis you invoke the transaction method of the client and give it a callback; Predis issues MULTI before running the callback and EXEC afterwards, so you never call those commands yourself:
$responses = $redis->transaction(function ($tx) use ($value) {
    $tx->incr($value);
    $tx->sadd("set", $value);
});
I am trying to write a Laravel function that gets lots of records (100,000+) from one database and puts them in another database. Towards that end, I need to query my database to see if the user already exists. I repeatedly call this code:
$users = User::where('id', '=', 2)->first();
And then, after that happens a few hundred times, I run out of memory. So I made a minimal example that uses up all the available memory, and it looks like this:
<?php

use Illuminate\Console\Command;

class memoryleak extends Command
{
    protected $name = 'command:memoryleak';

    protected $description = 'Demonstrates memory leak.';

    public function fire()
    {
        ini_set("memory_limit", "12M");

        for ($i = 0; $i < 100000; $i++) {
            var_dump(memory_get_usage());
            $this->external_function();
        }
    }

    function external_function()
    {
        // Next line causes memory leak - comment out to compare to normal behavior
        $users = User::where('id', '=', 2)->first();
        unset($users);
        // User goes out of scope at the end of this function
    }
}
And the output of this script (executed by 'php artisan command:memoryleak') looks something like this:
int(9298696)
int(9299816)
int(9300936)
int(9302048)
int(9303224)
int(9304368)
....
int(10927344)
int(10928432)
int(10929560)
int(10930664)
int(10931752)
int(10932832)
int(10933936)
int(10935072)
int(10936184)
int(10937320)
....
int(12181872)
int(12182992)
int(12184080)
int(12185192)
int(12186312)
int(12187424)
PHP Fatal error: Allowed memory size of 12582912 bytes exhausted (tried to allocate 89 bytes) in /Volumes/Mac OS/www/test/vendor/laravel/framework/src/Illuminate/Database/Connection.php on line 275
If I comment out the line "$users = User::where('id', '=', 2)->first();" then the memory usage stays stable.
Does anyone have any insight as to why this line would use memory like this, or know a smarter way to accomplish what I am trying to do?
Thank you for your time.
I recreated your script and stepped through it with a debugger because I couldn't fathom what sort of horrible thing would cause this type of memory issue. As I stepped through, I came across this:
// in Illuminate\Database\Connection
$this->queryLog[] = compact('query', 'bindings', 'time');
It seems every query you run in Laravel is stored in a persistent log, which explains your increasing memory usage after each query. Just above that is the following line:
if ( ! $this->loggingQueries) return;
A little more digging determined that the loggingQueries property is set to true by default and can be changed via the disableQueryLog method. That means if you call:
DB::connection()->disableQueryLog();
before you execute all your queries, you won't see ever-increasing memory usage; it solved the problem when I ran my test based on your example code. When you're done, if you don't want to affect the rest of the application, you can call
DB::connection()->enableQueryLog();
to re-enable logging.
I can't say why it isn't releasing memory. Your best bet is to follow the code and learn how it does what it does for that one. Or ask Taylor.
As for other things you can do:
Cache the query. If you're calling the same query over and over, use the query cache. It's as simple as adding ->remember($time_to_cache) to your query.
Make the DBMS do all the hard work. Ideally, you'd just do an INSERT INTO ... SELECT statement, but that gets hairy when you're crossing databases. In lieu of that, batch both the select and the insert queries so that you're making fewer calls to the databases and creating fewer objects. This offloads more of the heavy lifting to the database management system, which is arguably more efficient at these types of tasks; a sketch of this batching approach follows.
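A minimal sketch of that batching idea, assuming the source users live on the default connection and the destination database is configured as a hypothetical 'target' connection (the column list and chunk size are illustrative):
use Illuminate\Support\Facades\DB;

// Stop the query log from growing while looping over 100,000+ records
DB::connection()->disableQueryLog();

// Read the source rows in chunks and write each chunk with a single INSERT
User::orderBy('id')->chunk(500, function ($users) {
    $rows = $users->map(function ($user) {
        return [
            'id'    => $user->id,
            'email' => $user->email, // column list is illustrative
        ];
    })->all();

    DB::connection('target')->table('users')->insert($rows);
});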