Laravel: build a queue object using Redis cache

I'm trying to figure something out. I built a logger that keeps track of certain events, which happen fairly often (i.e. 1-500 times a minute).
To optimize this properly, I'm storing to Redis, and I have a task that grabs the queue object from Redis, clears the cache key, and inserts each individual entry into the db.
The enqueue happens in the destructor, once my logger is done observing the data.
For obvious reasons, I don't want a db write happening every time, so to speed this up I write to Redis and then flush to the db in a task.
The issue is that my queue implementation is as follows:
fetch the object with key xyz from Redis
append the new entry to the object
store the object with key xyz back in Redis
This is inefficient; I would like to be able to enqueue straight into Redis. Redis has a built-in list type which I could use, but Laravel's Redis cache driver doesn't expose it. I tried to figure out a way to send raw commands to Redis from Laravel, but I can't seem to make it work.
I was thinking of just storing keys under a tag in Redis, but I quickly discovered that Laravel's implementation of tags is not a 'proper' one and will not allow fetching tagged items without knowing their keys, so I can't use a tag as a queue with each key as an object in it.
If anyone has any idea how I could either talk to Redis directly and make use of lists, or if there's something I missed, it would really help.

EDIT
While the approach below does work, there is a more proper way of doing it using the Redis facade, as a reply mentioned; more on it here: Documentation
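A minimal sketch of the facade approach (the key name and $entry payload here are hypothetical; assumes a Redis connection is configured in config/database.php):
use Illuminate\Support\Facades\Redis;

// Methods not defined on the facade are proxied straight through to the
// underlying client (Predis/PhpRedis), so native list commands are available:
Redis::rpush('logger:queue', json_encode($entry));
$batch = Redis::lrange('logger:queue', 0, -1);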
Okay, if anyone runs into this: I did not see it documented properly anywhere, but you need to do the following.
-> Get an instance of Redis through the Cache facade Laravel provides:
$redis = Cache::getRedis();
-> Call Redis functions through it:
$redis->rpush('test.key', 'data');
I believe you'll want Predis as your driver.
If you're building a log driver, you'll want these 2 functions implemented:
public function cacheEnqueue($data) : int
{
    $size = $this->_redis->lpush($this->getCacheKey("c_stack_queue"), $data);

    if ($size > 2000)
    {
        // prevent runaway logs
        $this->_redis->ltrim($this->getCacheKey("c_stack_queue"), 0, 2000);
    }

    return $size;
}
/**
 * Fetch items from the stack. Multi-thread safe.
 *
 * @param int $number Fetch the last x items.
 *
 * @return array
 */
public function cachePopMulti(int $number) : array
{
    $data = $this->_redis->lrange($this->getCacheKey("c_stack_queue"), -$number, -1);
    $this->_redis->ltrim($this->getCacheKey("c_stack_queue"), 0, -1 * ($number + 1));

    return $data;
}
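The flush task can then drain the list in batches and bulk-insert (a sketch only; the table name, JSON payloads, and batch size are assumptions based on the setup described above):
// e.g. in the scheduled task: pop up to 500 entries and write them in one query
$entries = $logger->cachePopMulti(500);

$rows = array_map(function ($entry) {
    return json_decode($entry, true);
}, $entries);

if (! empty($rows)) {
    DB::table('event_log')->insert($rows);
}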
Of course, write your own key generator, getCacheKey().
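For reference, getCacheKey() only needs to namespace the key; something like this is enough (the prefix is hypothetical):
protected function getCacheKey(string $suffix) : string
{
    // namespace keys per application/logger to avoid collisions
    return 'myapp:logger:' . $suffix;
}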

Related

Laravel cache DatabaseStore clean up

I use the cache DatabaseStore for managing mutex files with the Laravel scheduler.
I modified app/Console/Kernel.php a bit to make this work:
protected function defineConsoleSchedule()
{
    $this->app->bind(SchedulingMutex::class, function () {
        $container = Container::getInstance();
        $mutex = $container->make(CacheSchedulingMutex::class);

        return $mutex->useStore('database');
    });

    parent::defineConsoleSchedule();
}
I need to be able to run the scheduler on multiple servers, which requires shared storage. Since all my servers have different Redis instances, I decided to use the database cache store, which is provided out of the box.
All works fine, but the db table named cache, where everything is stored, does not get cleaned up even after the cache entries expire.
Here are some tuples from the table:
key                                        value  expiration
laravelframework/schedule-015655105069...  b:1;   1539126032
laravelframework/schedule-015655105069...  b:1;   1539126654
The first one has expiration 1539126032 (2018-10-09 23:00:32), and the current time is 2018-10-10 08:09:45, so I would expect it to have been cleaned up already.
The question is: should I implement something to maintain this table, or should Laravel handle it? What am I doing wrong if it's Laravel's job?
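As far as I can tell, the database cache store only deletes an expired row lazily, when the same key is read again, so rows that are never re-read just sit there. One workaround is to schedule a sweep yourself (a sketch; the column names match the default cache-table migration):
// e.g. inside the schedule() method of app/Console/Kernel.php
$schedule->call(function () {
    \DB::table('cache')
        ->where('expiration', '<=', \Carbon\Carbon::now()->getTimestamp())
        ->delete();
})->daily();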

BizTalk Debatched Message Value Caching

I get a file with 4000 entries and debatch it, so I don't lose the whole message if one entry has corrupt data.
The BizTalk map accesses an SQL server. Before I debatched the message, I simply cached the SQL data in the map, but now I have 4000 independent maps.
Without caching, the process takes about 30 times longer.
Is there a way to cache the data from the SQL Server somewhere outside the map without losing much performance?
It is not a recommended pattern to access a database in a Map.
Since what you describe sounds like you're retrieving static reference data, another option is to move the process to an Orchestration where the reference data is retrieved one time into a Message.
Then, you can use a dual input Map supplying the reference data and the business message.
In this pattern, you can either debatch in the Orchestration or use a Sequential Convoy.
I would always avoid accessing SQL Server in a map - it gets very easy to inadvertently make many more calls than you intend (whether because of a mistake in the map design or because of unexpected volume or usage of the map on a particular port or set of ports). In fact, I would generally avoid making any kind of call in a map that has to access another system or service, but if you must, then caching can help.
You can cache using, for example, MemoryCache. The pattern I use with that generally involves a custom C# library where you first check the cache for your value, and if there's a miss you check SQL (either for the particular entry or the entire data set), e.g.:
object _syncRoot = new object();
...
public string CheckCache(string key)
{
    string check = MemoryCache.Default.Get(key) as string;
    if (check == null)
    {
        lock (_syncRoot)
        {
            // make sure someone else didn't get here before we acquired the lock, avoid duplicate work
            check = MemoryCache.Default.Get(key) as string;
            if (check != null) return check;

            string sql = @"SELECT ...";
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                conn.Open();
                using (SqlCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = sql;
                    cmd.Parameters.AddWithValue(...);
                    // ExecuteScalar or ExecuteReader as appropriate, read values out into check,
                    // and use MemoryCache.Default.Add with a sensible expiration to cache the data
                }
            }
        }
    }

    // either the cached value or the freshly loaded one
    return check;
}
A few things to keep in mind:
This will work on a per-AppDomain basis, and pipelines and orchestrations run in separate AppDomains. If you execute this map in both places, you'll end up with caches in both places. The complexity added in trying to share this across AppDomains is probably not worth it, but if you really need that, you should isolate your caching into something like a WCF NetTcp service.
This will use more memory - you shouldn't just throw everything and anything into a cache in BizTalk, and if you're going to cache stuff make sure you have lots of available memory on the machine and that BizTalk is configured to be able to use it.
The MemoryCache can store whatever you want - I'm using strings here, but it could be other primitive types or objects as well.

Redis multiple connections and multiple incr calls without losing data

So I have a Laravel 5.2 project where Redis is used as the cache driver.
There is a controller with a method that connects to Redis, increments a value, and adds the value to a set each time the method is called, just like this:
$redis = Redis::connection();
$redis->incr($value);
$redis->sadd("set", $value);
But the problem is that sometimes there are many connections and many calls to this method at the same time, and data gets lost: if two callers invoke this method while $value is 2, after their incrs it ends up as 3, but it should be 4 (after two incrs, basically).
I have thought about using Redis transactions, but I can't figure out when I should call the multi command to start queuing commands and when to exec it.
I also had the idea of collecting all the incrs and sadds as strings in another set and then applying them with a cron job, but that would cost too much RAM.
So, any suggestions how this data loss can be avoided?
Laravel uses Predis as a Redis driver.
To execute a transaction with Predis you have to invoke the transaction method of the driver and give it a callback:
$responses = $redis->transaction(function ($tx) use ($value) {
    // queue the commands on the transaction object, not on the outer connection
    $tx->incr($value);
    $tx->sadd("set", $value);
});
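Predis buffers the commands issued on $tx and wraps them in MULTI/EXEC for you, so the two commands are applied atomically and $responses comes back as an array with one reply per queued command.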

Store API response data in a server-side cache with PHP

I am using an API whose request-response time is too long, so I want to store the response on the server, not in the browser, and serve similar searches from that cache. I know there are limitations on the accuracy of cached data, but that's another matter which I don't want to get into here.
I am using the Laravel framework, and at the moment I use this code, as shown in the Laravel documentation:
$expiresAt = Carbon::now()->addMinutes(10);
Cache::put('key', 'value', $expiresAt);
The problem with this code is that it stores the cache in the browser only, but I want to store it on the server. I have heard about Memcached but could not implement it. I have also heard of apc_store(), but I think that stores locally. So, how can I store the cache on the server?
Similar to Cache::put(), you can use Cache::get() to check for data saved (on the server). (Cache::pull() also exists, but it deletes the entry as it reads it, which defeats the purpose here.)
// Check cache for data
$cachedData = Cache::get($key);
// Get new data if no cache
if (! $cachedData) {
    $newData = 'New Data';
    $expiresAt = Carbon::now()->addMinutes(10);
    // Save new data to cache
    Cache::put($key, $newData, $expiresAt);
    $cachedData = $newData;
}
echo $cachedData;
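Laravel also ships a helper that collapses this read-through pattern into a single call, Cache::remember(); in Laravel 5.x the TTL is given in minutes, and the fetchFromApi() helper below is hypothetical:
$cachedData = Cache::remember($key, 10, function () {
    // only runs on a cache miss; the return value is stored for 10 minutes
    return fetchFromApi();
});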

Laravel / Eloquent memory leak retrieving the same record repeatedly

I am trying to write a Laravel function that gets lots of records (100,000+) from one database and puts them in another database. Towards that end, I need to query my database and see if the user already exists. I repeatedly call this code:
$users = User::where('id', '=', 2)->first();
And then, after that happens a few hundred times, I run out of memory. So I made a minimal example that uses up all the available memory, and it looks like this:
<?php

use Illuminate\Console\Command;

class memoryleak extends Command
{
    protected $name = 'command:memoryleak';
    protected $description = 'Demonstrates memory leak.';

    public function fire()
    {
        ini_set("memory_limit", "12M");

        for ($i = 0; $i < 100000; $i++)
        {
            var_dump(memory_get_usage());
            $this->external_function();
        }
    }

    function external_function()
    {
        // Next line causes memory leak - comment out to compare to normal behavior
        $users = User::where('id', '=', 2)->first();
        unset($users);
        // User goes out of scope at the end of this function
    }
}
And the output of this script (executed by 'php artisan command:memoryleak') looks something like this:
int(9298696)
int(9299816)
int(9300936)
int(9302048)
int(9303224)
int(9304368)
....
int(10927344)
int(10928432)
int(10929560)
int(10930664)
int(10931752)
int(10932832)
int(10933936)
int(10935072)
int(10936184)
int(10937320)
....
int(12181872)
int(12182992)
int(12184080)
int(12185192)
int(12186312)
int(12187424)
PHP Fatal error: Allowed memory size of 12582912 bytes exhausted (tried to allocate 89 bytes) in /Volumes/Mac OS/www/test/vendor/laravel/framework/src/Illuminate/Database/Connection.php on line 275
If I comment out the line "$users = User::where('id', '=', 2)->first();" then the memory usage stays stable.
Does anyone have any insight as to why this line would use memory like this, or know a smarter way to accomplish what I am trying to do?
Thank you for your time.
I recreated your script and stepped through it with a debugger because I couldn't fathom what sort of horrible thing would cause this type of memory issue. As I stepped through, I came across this:
// in Illuminate\Database\Connection
$this->queryLog[] = compact('query', 'bindings', 'time');
It seems every query you run in Laravel is stored in a persistent log, which explains your increasing memory usage after each query. Just above that is the following line:
if ( ! $this->loggingQueries) return;
A little more digging determined that the loggingQueries property is set to true by default and can be changed via the disableQueryLog method. That means that if you call:
DB::connection()->disableQueryLog();
before executing all your queries, you won't see ever-increasing memory usage; it solved the problem when I ran my test based on your example code. When you're done, if you don't want to affect the rest of the application, you can call:
DB::connection()->enableQueryLog();
to re-enable logging.
I can't say why it isn't releasing memory. Your best bet is to follow the code and learn how it does what it does for that one. Or ask Taylor.
As for other things you can do:
Cache the query. If you're calling the same query over and over and over, then use the query cache. It's as simple as adding ->remember($time_to_cache) to your query.
Make the DBMS do all the hard work. Ideally, you'd just do an INSERT INTO ... SELECT statement, but that gets hairy when you're crossing databases. In lieu of that, batch both the select and the insert queries so that you're making fewer calls to the databases and creating fewer objects; a sketch follows below. This offloads more of the heavy lifting to the database management system, which is arguably more efficient at these types of tasks.
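To illustrate the batching idea (a sketch only; the connection names, columns, and chunk size are assumptions, not the asker's actual schema):
DB::connection()->disableQueryLog();

// read 500 users per SELECT instead of one query per user
User::on('source')->chunk(500, function ($users) {
    $rows = [];
    foreach ($users as $user) {
        $rows[] = ['id' => $user->id, 'email' => $user->email];
    }

    // write each batch with a single multi-row INSERT on the destination
    DB::connection('destination')->table('users')->insert($rows);
});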
