Laravel cache DatabaseStore clean up

I use cache DatabaseStore for managing mutex files with Laravel scheduler.
I modified app/Console/Kernel.php a bit to make this work.
use Illuminate\Console\Scheduling\CacheSchedulingMutex;
use Illuminate\Console\Scheduling\SchedulingMutex;
use Illuminate\Container\Container;

protected function defineConsoleSchedule()
{
    // Bind the scheduling mutex to one backed by the 'database' cache store,
    // so all servers share the same mutex storage.
    $this->app->bind(SchedulingMutex::class, function () {
        $container = Container::getInstance();

        $mutex = $container->make(CacheSchedulingMutex::class);

        return $mutex->useStore('database');
    });

    parent::defineConsoleSchedule();
}
I need to be able to run the scheduler on multiple servers, but that requires shared storage. Since every server has its own Redis instance, I decided to use the database cache store, which is provided out of the box.
Everything works fine, but the db table named cache, where everything is stored, does not get cleaned up even after the cache entries expire.
Here are some rows from the table:
key                                         value   expiration
laravelframework/schedule-015655105069...  b:1;    1539126032
laravelframework/schedule-015655105069...  b:1;    1539126654
The first one has expiration 1539126032 (2018-10-09 23:00:32), and the current time is 2018-10-10 08:09:45, so I would expect it to have been cleaned up by now.
The question is: should I implement something to maintain this table, or should Laravel handle it? If it is Laravel's duty, what am I doing wrong?
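For what it's worth, Laravel's DatabaseStore only deletes an expired row lazily, when that exact key is read again; since each scheduler mutex key embeds a timestamp and is never re-read, the rows linger. A minimal cleanup sketch, assuming the default cache table produced by php artisan cache:table (with its expiration column holding Unix timestamps):

use Illuminate\Support\Facades\DB;

// In app/Console/Kernel.php's schedule(): purge rows whose expiration
// timestamp has already passed.
$schedule->call(function () {
    DB::table('cache')->where('expiration', '<=', time())->delete();
})->hourly();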

Related

Best way to optionally cache responses in Laravel?

I'm building a Laravel API and I was wondering what the best method is for optionally caching responses.
Assuming I have the following UserController in my API:
public function index()
{
    // Cache for 3600 time units, as an integer rather than the string '3600'
    // (minutes before Laravel 5.8, seconds from 5.8 onwards)
    return Cache::remember('users', 3600, function () {
        return new UserCollection(User::all());
    });
}
Now let's say that in my app's front-end I want to force a refresh, in case there has been an update to the users within the past hour, and I do not want to wait one hour for the cache to expire. How can I force a cache refresh and query the users from the database rather than the cache, without waiting for the cache to expire?
One thought I had was to pass an additional header and use middleware to retrieve the uncached resource, but I'm not sure about this idea.
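For concreteness, that header idea might look roughly like this (a minimal sketch; the X-Refresh-Cache header name is made up for illustration):

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Cache;

public function index(Request $request)
{
    // Hypothetical opt-out header: drop the stale entry so remember()
    // rebuilds it from the database on this request.
    if ($request->hasHeader('X-Refresh-Cache')) {
        Cache::forget('users');
    }

    return Cache::remember('users', 3600, function () {
        return new UserCollection(User::all());
    });
}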
Is there a better way of doing this?
Thanks

Laravel build a queue object using redis cache

I'm trying to figure something out. I built a logger that keeps track of certain events happening; these happen fairly often (i.e. 1-500 times a minute).
To optimize this properly, I'm storing to Redis, and I have a task that grabs the queue object from Redis, clears the cache key, and inserts each individual entry into the DB.
I have the enqueue happening in the destructor, when my logger is done observing the data.
For obvious reasons I don't want a DB write happening every time, so to speed this up I write to Redis and then flush to the DB in a task.
The issue is that my queue implementation is as follows:
1. fetch the object with key xyz from Redis
2. append the new entry to the object
3. store the object with key xyz back in Redis
This is inefficient; I would like to be able to enqueue straight into Redis. Redis has a built-in list type which I could use, but Laravel's Redis cache driver doesn't expose it. I tried to figure out a way to send raw commands to Redis from Laravel, but I can't seem to make it work.
I was thinking of just storing keys under a tag in Redis, but I quickly discovered that Laravel's implementation of tags is not 'proper' and will not allow fetching tagged items without a key, so I can't use a tag as a queue with each key an object in it.
If anyone has any idea how i could either talk to redis directly and make use of list, or if there's something i missed, it would really help.
EDIT
While the way below does work, there is a more proper way of doing it using a facade for Redis, as a reply mentioned; more on it here: Documentation
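A minimal sketch of that facade approach (the Redis facade proxies method calls straight through to the underlying client, so list commands just work; the key name here is made up):

use Illuminate\Support\Facades\Redis;

// rpush appends to the tail of the list stored at the given key
Redis::rpush('logger:event_queue', 'data');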
Okay, if anyone runs into this: I did not see it documented properly anywhere, but you need to do the following:
-> get an instance of Redis through the Cache facade Laravel provides.
$redis = Cache::getRedis();
-> call Redis functions through it.
$redis->rpush('test.key', 'data');
I believe you'll want predis as your driver.
If you're building a log driver, you'll want these 2 functions implemented:
public function cacheEnqueue($data) : int
{
    $size = $this->_redis->lpush($this->getCacheKey("c_stack_queue"), $data);

    if ($size > 2000)
    {
        // Prevent runaway logs: keep only the 2001 newest entries
        $this->_redis->ltrim($this->getCacheKey("c_stack_queue"), 0, 2000);
    }

    return $size;
}

/**
 * Fetch items from the stack. Multi-thread safe.
 *
 * @param int $number Fetch last x items.
 *
 * @return array
 */
public function cachePopMulti(int $number) : array
{
    // lpush prepends, so the oldest entries sit at the tail:
    // read the last $number items, then trim them off the list
    $data = $this->_redis->lrange($this->getCacheKey("c_stack_queue"), -$number, -1);
    $this->_redis->ltrim($this->getCacheKey("c_stack_queue"), 0, -1 * ($number + 1));

    return $data;
}
Of course, write your own key generator, getCacheKey().
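A hypothetical usage sketch under those assumptions (the event payload and target table are made up):

// In the logger, as events are observed:
$logger->cacheEnqueue(json_encode(['event' => 'page_view', 'ts' => time()]));

// In the scheduled flush task: drain a batch and write it to the DB.
foreach ($logger->cachePopMulti(500) as $row) {
    DB::table('event_log')->insert(json_decode($row, true));
}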

Laravel Scheduling in clustered environment

I am working with scheduling in Laravel 5.3. Previously, I was using one server to host the Laravel application. Now that I am using two servers to run it, how do I ensure that both servers are not running the same jobs at the same time?
Recently, I saw an Event method called "withoutOverlapping()". See https://laravel.com/docs/5.3/scheduling#preventing-task-overlaps
In my case, withoutOverlapping() cannot help me as I am working in a clustered environment.
Are there any workarounds or suggestions regarding this?
First of all, decide whether it is critical to avoid running a task multiple times.
For example, if your app uses a task to do some sort of cleanup, there is almost no drawback to running it on every server (who cares if you try to delete 10-minute-old messages twice?).
If it is absolutely critical to run every task only once, you'll need to designate a "main server" that executes tasks, while the other servers just answer requests and don't perform any scheduled work. This is quite trivial, as you just have to give every env a different name in its .env and test against that name when you define the scheduler tasks.
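A minimal sketch (the 'main' environment name and the example command are made up):

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // Only the server whose .env sets APP_ENV=main runs scheduled tasks.
    if ($this->app->environment('main')) {
        $schedule->command('emails:send')->daily();
    }
}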
This is the easiest way; seriously, don't bother building a database locking mechanism or the like to synchronise tasks across servers. Even operating systems struggle to properly synchronise threads on the same machine, so why would you want to implement the same across different machines?
Here's what I did when I ran into the same problem with load balancing:
abstract class MutexCommand extends Command {
    private $hash = null;

    public function cleanup() {
        if (is_string($this->hash)) {
            Redis::del($this->hash);
            $this->hash = null;
        }
    }

    abstract protected function generateHash();
    abstract protected function handleInternal();

    final public function handle() {
        register_shutdown_function([$this, "cleanup"]);
        try {
            $this->hash = $this->generateHash();
            // Set a value if it does not exist, atomically. Will fail if it does exist.
            // Essentially setnx is the mechanism to acquire the lock.
            if (!Redis::setnx($this->hash, true)) {
                $this->hash = null; // Prevent the other instance's lock from being cleaned up
                throw new Exception("Already running");
            }
            $this->handleInternal();
        } finally {
            $this->cleanup();
        }
    }
}
Then you can write your commands:
class ThisShouldNotOverlap extends MutexCommand {
    public function generateHash() {
        return "Unique key for mutex, you can just use the class name if you want by doing return static::class";
    }

    public function handleInternal() { /* do stuff */ }
}
Then whenever you try to run the same command on multiple instances, one will successfully acquire the "lock" and the others will fail.
Of course this assumes that you are using a non-clustered Redis cache.
If you are not using Redis, there are probably similar locking mechanisms you can implement in other caches; if you are using clustered Redis, you may need to use the Redlock locking mechanism.
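As an aside, newer Laravel releases (5.6+) ship atomic cache locks that wrap this same setnx-style pattern; a minimal sketch, assuming a lock-capable cache driver:

use Illuminate\Support\Facades\Cache;

// Acquire a named lock with a 10-minute TTL so a crashed worker
// cannot hold it forever.
$lock = Cache::lock('this-should-not-overlap', 600);

if ($lock->get()) {
    try {
        // do stuff
    } finally {
        $lock->release();
    }
} else {
    // Another server already holds the lock.
}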
Essentially no, there's no natural way in Laravel to know whether another Laravel app has the same job on the job dispatcher.
We have some options to find a solution:
Create an intermediate app that manages the jobs from the other apps.
Allow only one app to dispatch jobs.
Use worker queues; there are packages for this. I would recommend using Laravel 5 with WebSockets and queueing asynchronously.
First of all, the Laravel scheduler isn't designed to work in a clustered environment; it was never intended to be that way.
I would suggest having a dedicated cron instance which manages your Laravel scheduler jobs.

Store API response data in a cache on the server (PHP)

I am using an API, and the request-response time on the API takes too long. Because of this, I want to store the response on the server, not in the browser, so that I can display results from the cache for similar searches. I know there are limitations on the accuracy of cached data, but that's another matter which I don't want to get into here.
I am using the Laravel framework, and for the moment I am using this code, as in the Laravel documentation:
$expiresAt = Carbon::now()->addMinutes(10);
Cache::put('key', 'value', $expiresAt);
The problem with this code is that it appears to store the cache in the browser only, but I want to store it on the server. I have heard about Memcached but could not implement it. I have also heard of apc_store(), but I think that stores locally. So how can I store the cache on the server?
Similar to Cache::put(), you can use Cache::get() to check for data already saved (on the server); note that Cache::pull() would also read the value, but it deletes the entry as it does so.
// Check the cache for data
$cachedData = Cache::get($key);
// Get new data if nothing was cached
if (! $cachedData) {
    $newData = 'New Data';
    $expiresAt = Carbon::now()->addMinutes(10);
    // Save the new data to the cache; put() does not return the value,
    // so keep it in a variable
    Cache::put($key, $newData, $expiresAt);
    $cachedData = $newData;
}
echo $cachedData;
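The same read-or-rebuild pattern is available in one call via Cache::remember(), which only runs the closure (and stores its result) when the key is missing:

$cachedData = Cache::remember($key, Carbon::now()->addMinutes(10), function () {
    return 'New Data';
});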

Meteor Session Replacement?

In the latest Meteor release (version 0.5.8), Session has been removed from the server-side code.
Previously I've used Session to store client-specific variables for the server; what is the replacement for this functionality?
Example case: User One opens a browser, User Two opens a browser. One calls a method on the server setting some token, the other calls a method on the server doing the same. I then need to access this when the client requests something. How do I differentiate between the two?
You'll want to save your tokens to a collection in the database.
You could use a Session on the server if you wanted to, simply by copying the session package into your application's packages directory and changing its package.js to also load on the server. But a Session is an in-memory data structure, and so won't work if you have multiple server instances; and you wouldn't be able to restart the server without losing your users' tokens.
If you store your tokens in the database they'll persist across server restarts, and will work with a future version of Meteor which is able to scale an application by adding more server instances when needed.
If you need to expire your tokens (so that your collection doesn't grow without bound), you could add a "lastUsed" Date field to your token collection, and periodically remove tokens that haven't been used for longer than your chosen expiration period.
You can use each client's session id, which is unique to the tab too. I'm not too sure how to get the current session id, but it should be there somewhere (you can see it in Meteor.default_server.sessions), so there is still a way:
Client-side JS:
Meteor.call("test", Meteor.default_connection._lastSessionId, function(err, result) {
    console.log(result);
});
Server-side JS:
Session = {
    set: function(key, value, sessionid) {
        console.log(Meteor.default_server.sessions[sessionid]);
        if (!Meteor.default_server.sessions[sessionid].session_hash) Meteor.default_server.sessions[sessionid].session_hash = {};
        // Bracket notation, so the value is stored under the given key;
        // dot notation would always write the literal property "key"
        Meteor.default_server.sessions[sessionid].session_hash[key] = value;
    },
    get: function(key, sessionid) {
        if (Meteor.default_server.sessions[sessionid].session_hash)
            return Meteor.default_server.sessions[sessionid].session_hash[key];
    },
    equals: function(key, value, sessionid) {
        return (this.get(key, sessionid) == value);
    },
    listAllSessionids: function() {
        return _.pluck(Meteor.default_server.sessions, "id");
    }
};
Meteor.methods({
    test: function(sessionid) {
        if (!Session.get("initial_load", sessionid)) Session.set("initial_load", new Date().getTime(), sessionid);
        return Session.get("initial_load", sessionid);
    }
});
I hook into Meteor.default_server.sessions to store the values, so that there's some type of garbage collection involved when the session isn't valid anymore (i.e. the user has closed his tabs), to prevent memory being wasted. In livedata_server.js these old sessions get destroyed after one minute of no activity on the DDP wire (like the heartbeat).
Because the server can see everyone's sessions, you can use the sessionid to access another user's session data, and listAllSessionids() to give out an array of all the session ids currently active.
Automatically setting the session, like this.userId, in a Method without using a param in the call:
It looks like there is functionality for this but it's not fully hooked up. The session id would be stored in this.sessionData, but it's likely still unfinished. It's there to be called in a Method, but there's nowhere that it's being set yet (in livedata_connection.js & livedata_server.js).
