Reduce garbage collection time in CodeIgniter

Garbage collection function in CI
function _sess_gc()
{
    // Sessions are only garbage-collected when they are stored in the database
    if ($this->sess_use_database != TRUE)
    {
        return;
    }

    srand(time());
    if ((rand() % 100) < $this->gc_probability)
    {
        // Delete every session row whose last activity is older than the expiration window
        $expire = $this->now - $this->sess_expiration;
        $this->CI->db->where("last_activity < {$expire}");
        $this->CI->db->delete($this->sess_table_name);

        log_message('debug', 'Session garbage collection performed.');
    }
}
I'm new to CodeIgniter and I found this GC function in Session.php in the libraries folder.
My problem is: how do I change how often the GC deletes expired sessions?
Example: I want it to delete expired sessions every 3 minutes, but this function only deletes expired sessions from the database when the probability check is met.
Is there a way of doing it?
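One approach, assuming CodeIgniter 2.x with the native Session library, is to lower sess_expiration to 180 seconds and raise the gc_probability property so the cleanup runs on (almost) every request. The sketch below is illustrative only: MY_Session is a hypothetical extension of the stock library, and a probability of 100 simply removes the random element.

// application/config/config.php: treat sessions as expired after 3 minutes
$config['sess_expiration'] = 180;

// application/libraries/MY_Session.php: hypothetical override that makes
// _sess_gc() run its cleanup on every request instead of roughly 5% of them
class MY_Session extends CI_Session
{
    public $gc_probability = 100;
}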

Related

How to prevent additional page requests after response sent

I have configured a listener on kernel.request that sets a redirect response when the session time has reached a certain value. The listener works fine: on the next request after the session has ended, it redirects to a certain page. My problem is that the page has many links, and if I click another link while the first request is still in flight, the initial request carrying the redirect is cancelled/stopped and a new request is made for the last link clicked, so it gets past my redirect even though the session has ended and been destroyed. So, my question is: how do I prevent additional requests/link clicks after the first request has been made?
Here is my code:
public function onKernelRequestSession(GetResponseEvent $event)
{
    $request = $event->getRequest();
    $route = $request->get('_route');
    $session = $request->getSession();
    if ((false === strpos($route, '_wdt')) && ($route != null)) {
        $session->start();
        $time = time() - $session->getMetadataBag()->getCreated();
        if ($route != 'main_route_for_idle_page') {
            if (!$session->get("active") && $route == 'main_route_for_site_pages') {
                $session->invalidate();
                $session->set("active", "1");
            } else {
                if ($time >= $this->sessionTime) {
                    $session->clear();
                    $session->invalidate();
                    $event->setResponse(new RedirectResponse($this->router->generate('main_route_for_idle_page')));
                }
            }
        } else {
            if ($session->get("activ")) {
                $session->clear();
                $session->invalidate();
            }
        }
    }
}
Thank you.
Idea #1: Simple incremental counter
Each request sends a sequence number as a parameter, which is verified against the expected value on the server.
The server increments the number and sends it back in the response;
the new number is then used in future requests.
Basically, if the server expects the sequence number to be 2 and the client sends 1, the request is rejected.
Idea #2: Unique hash each time
Similar to the idea above, but uses unique hashes to eliminate the predictable nature of an incremental sequence.
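A minimal plain-PHP sketch of idea #2, assuming native PHP sessions rather than the Symfony session object; the request_token name and the 409 response are illustrative choices, not part of the original answer:

session_start();

if (!isset($_SESSION['request_token'])) {
    // First visit: no token has been issued yet, so just mint one.
    $_SESSION['request_token'] = md5(uniqid(mt_rand(), true));
} elseif (!isset($_GET['token']) || $_GET['token'] !== $_SESSION['request_token']) {
    // Token missing or stale: treat this as a duplicate / out-of-order request.
    header('HTTP/1.1 409 Conflict');
    exit('Duplicate or out-of-order request rejected.');
} else {
    // Token matched: rotate it so it can only be used once.
    $_SESSION['request_token'] = md5(uniqid(mt_rand(), true));
}

// Embed the current token in every link rendered on the page.
echo '<a href="/page?token=' . $_SESSION['request_token'] . '">Next page</a>';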
I resolved the issue using jQuery: once a link has been clicked, any further clicks are ignored, so only one request is made from the page:
var isClicked = false;
$(".menu-link").click(function (e) {
    if (!isClicked) {
        isClicked = true;
    } else {
        e.preventDefault();
    }
});
Thanks.

Destroying the session if the user is idle

My goal is to destroy the logged-in user's session and force them to log in again if they have been idle for 20 minutes. Here is my way of doing it:
In each controller, I do this check:
if (reach_idle_limit()) {
    redirect('logout');
}
And reach_idle_limit() is a helper function in one of my helper files:
function reach_idle_limit()
{
    $idle_period = 1200; // 20 mins
    $CI =& get_instance();

    $last_activity = $CI->session->userdata('last_activity');
    $now_time = time();

    // If $last_activity is not set, don't force a logout
    if ($last_activity == false || $last_activity == 0) {
        return false;
    }
    // If idle period exceeded: destroy the session and return true
    else if ($now_time - $last_activity > $idle_period) {
        $CI->session->sess_destroy();
        return true;
    }
    // Else, update session's last_activity to current time, return false
    else {
        $CI->session->set_userdata('last_activity', $now_time);
        return false;
    }
}
This works fine when I give $idle_period a small value, like 60 sec. But when I give it the value I seek, 20 min, it doesn't work!
FYI:
I'm using ag_auth library with Codeigniter (for the authentication part).
My config's variable sess_expiration is set to 0.
In this case, why not use session.gc_maxlifetime?
session.gc_maxlifetime specifies the number of seconds after which data will be seen as 'garbage' and potentially cleaned up. Garbage collection may occur during session start (depending on session.gc_probability and session.gc_divisor).
just use (the value is in seconds, so 20 minutes is 1200):
ini_set('session.gc_maxlifetime', 1200);
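As the quoted description says, collection only actually runs with probability session.gc_probability / session.gc_divisor at session start, so you may also want to tune those two settings. The 1/1 value below is illustrative and forces collection on every session start:

ini_set('session.gc_probability', 1);
ini_set('session.gc_divisor', 1); // 1/1: run garbage collection on every session start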

How can I solve the warning "Warning: array_key_exists."?

I'm using Hybridauth social login, and upon a user authenticating with Facebook, I receive the following error:
Warning: array_key_exists() [function.array-key-exists]: The second
argument should be either an array or an object in
/hybridauth/Hybrid/thirdparty/Facebook/base_facebook.php on line 1328
My guess (probably wrong) as to why this may be happening is that the parameters passed to Hybridauth come from the browser URL, and I have two: page=register and connected_with=facebook. Hybridauth only requires the second one...
It actually authenticates, but I want to get rid of this error. Why does this warning occur? Is there a way to hide it?
This is the bit that errors:
/**
 * Get the base domain used for the cookie.
 */
protected function getBaseDomain() {
    // The base domain is stored in the metadata cookie
    // if not we fallback to the current hostname
    $metadata = $this->getMetadataCookie();
    if (array_key_exists('base_domain', $metadata) &&
        !empty($metadata['base_domain'])) {
        return trim($metadata['base_domain'], '.');
    }
    return $this->getHttpHost();
}
This is the code the warning comes from:
/**
 * Destroy the current session
 */
public function destroySession() {
    $this->accessToken = null;
    $this->signedRequest = null;
    $this->user = null;
    $this->clearAllPersistentData();

    // JavaScript sets a cookie that will be used in getSignedRequest
    // that we need to clear if we can
    $cookie_name = $this->getSignedRequestCookieName();
    if (array_key_exists($cookie_name, $_COOKIE)) {
        unset($_COOKIE[$cookie_name]);
        if (!headers_sent()) {
            $base_domain = $this->getBaseDomain();
            setcookie($cookie_name, '', 1, '/', '.'.$base_domain);
        } else {
            // #codeCoverageIgnoreStart
            self::errorLog(
                'There exists a cookie that we wanted to clear that we couldn\'t '.
                'clear because headers was already sent. Make sure to do the first '.
                'API call before outputting anything.'
            );
            // #codeCoverageIgnoreEnd
        }
    }
}
It looks like getMetadataCookie() does not always return an array, possibly because the cookie has not yet been set. You may want to check that it's actually an array before using it as such:
if (is_array($metadata) && array_key_exists('base_domain', $metadata) &&
The same applies to the array_key_exists() call in the second snippet: if you're not sure the value you're passing is actually an array, check first.
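For clarity, this is how getBaseDomain() might look with that guard folded in; the body is otherwise unchanged from the snippet above:

protected function getBaseDomain() {
    // The base domain is stored in the metadata cookie;
    // if not, we fall back to the current hostname.
    $metadata = $this->getMetadataCookie();
    if (is_array($metadata) &&
        array_key_exists('base_domain', $metadata) &&
        !empty($metadata['base_domain'])) {
        return trim($metadata['base_domain'], '.');
    }
    return $this->getHttpHost();
}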

Search as you type lag caused by slow DB in asp.net MVC with entity framework

I'm implementing search-as-you-type functionality for a website with MVC.
My controller accepts a searchString parameter and returns the records accordingly.
What I do is load the controller's output into a div using this JavaScript:
$("#SearchField").keyup(function (event) {
$.ajax({
cache: false,
url: 'FolderList?searchString=' + $("#SearchField").val(),
success: function (data) {
$('#FolderList').empty().html(data);
}
});
});
Here is my controller code :
public ActionResult Search(string searchString, int? page)
{
    var folders = db.Folders.AsQueryable();

    if (!String.IsNullOrEmpty(searchString))
    {
        folders = folders.Where(p => p.SearchField.ToLower().Contains(searchString.ToLower()));
    }

    int pageSize = Convert.ToInt32(System.Configuration.ConfigurationManager.AppSettings["FolderPageSize"]);
    int pageNumber = (page ?? 1);

    return View(folders.ToPagedList(pageNumber, pageSize));
}
The problem is that the database entity I query inside my controller (Folders) loads very slowly (a database issue). It makes the search-as-you-type functionality lag so much that it isn't usable at all. That's an issue I cannot solve at the source; I have to deal with it.
So my question: is there a way to load the Folders entity into memory, so that the first page load takes a long time but every subsequent call to the controller is quick? Or am I going about this the wrong way? Is there another way?
If folders does not contain a huge amount of data, it is entirely reasonable to load all of the rows at application startup and store them in Cache, e.g.
Cache["folders"] = (from f in folders select f).ToList();
Then, replace
folders = folders.Where(p => p.SearchField.ToLower().Contains(searchString.ToLower()));
with a query against the cached list, remembering to cast the cache entry back from object to its list type (e.g. List<Folder>):
folders = ((List<Folder>)Cache["folders"]).Where(p => p.SearchField.ToLower().Contains(searchString.ToLower())).AsQueryable();
Keep in mind that the cache will empty if the application domain recycles. If it takes a very long time to read all necessary rows to fill the cache, you may want to plan on pre-warming the cache on app domain startup / recycle.

PHP Redis Session not saving

EDIT
I tried debugging this with Xdebug and NetBeans. It's weird that the exports will work during the debug session if I put in some breakpoints. However, with no breakpoints, a more realistic environment, the exports don't work.
I've tried adding sleeps into some parts of the code.
I think that maybe PHP is ending before the Redis commit is completed. Maybe the Redis connections are being done asynchronously, but I checked Predis and the default is a synchronous connection.
I am working on a reporting tool.
Here is the basic issue.
We store a report in the session object, but on later requests, when we try to get at the report in the session object, it's gone.
Here is a more detailed version.
I store a 'report' object into the session like so
$_SESSION['report_name_unixtimestamp'] = gzcompress( serialize( $reportObject ) );
The user sees the report in some table form and then if they want they can export it. The report could change so the idea behind storing it in the session like this is that when the user exports it to PDF, Excel, etc, they'll be getting a report identical to the one they are viewing.
The user clicks on an export button and on the PHP side it will go into the session, fetch the report via the key provided as a get parameter (uncompresses and unserializes it), create the export and send it to the user for download.
This has worked well up until the point that we tried to introduce the Redis caching server as a tool for better session management.
What happens now is the following:
The first time we run the report it will get stored into the cache and the export will work successfully.
We will run the report again, with the same user account in the same session. This changes the Unix timestamp, so there should be two entries in $_SESSION ( $_SESSION['report_name_oldertimestamp'] and $_SESSION['report_name_newertimestamp'] ). When we click on the export button again we get an error saying that the file doesn't exist ( because it hasn't been sent by the server ).
If we check the redis server for the newer version of the report it isn't there, but the old timestamp is still there.
Now, this worked with the file-based session management but not with Redis. We've tried the redis module for PHP as well as the pure PHP client Predis.
Does anyone have any ideas?
Here are a few more details:
Redis has NOT run out of memory. We've checked this many times.
We already know that to unserialize the report object in the session the report class has to be included already. ( remember, the first export works fine but anything after that fails )
If we check the php session object during the request that the report is running on, it WILL contain the newer report but it never makes it to Redis.
Below is the save handler that is being used with Predis.
redis_session_init() is the function I call right before session_start() so that the handler gets registered. I'm not sure how the redis_session_write() function works, though, so maybe someone can help me with that.
<?php
namespace RedisSession
{
    $redisTargetPrefix = "PHPREDIS_SESSION:";
    $unpackItems = array( );
    $redisServer = "tcp://cache.emcweb.com";

    // Registers the functions below as the session save handler. Call this right before session_start().
    function redis_session_init( $unpack = null, $server = null, $prefix = null )
    {
        global $unpackItems, $redisServer, $redisTargetPrefix;

        if( $unpack !== null )
        {
            $unpackItems = $unpack;
        }
        if( $server !== null )
        {
            $redisServer = $server;
        }
        if( $prefix !== null )
        {
            $redisTargetPrefix = $prefix;
        }

        session_set_save_handler( 'RedisSession\redis_session_open', 'RedisSession\redis_session_close', 'RedisSession\redis_session_read', 'RedisSession\redis_session_write', 'RedisSession\redis_session_destroy', 'RedisSession\redis_session_gc' );
    }

    // Fetches the serialized session payload for $id from Redis.
    function redis_session_read( $id )
    {
        global $redisServer, $redisTargetPrefix;

        $redisConnection = new \Predis\Client( $redisServer );

        return base64_decode( $redisConnection->get( $redisTargetPrefix . $id ) );
    }

    // Called at the end of the request: stores the whole session payload under the session key with a TTL,
    // and additionally writes each configured "unpack" item to its own key (or deletes that key if the
    // item is no longer present in $_SESSION).
    function redis_session_write( $id, $data )
    {
        global $unpackItems, $redisServer, $redisTargetPrefix;

        $redisConnection = new \Predis\Client( $redisServer );
        $ttl = ini_get( "session.gc_maxlifetime" );

        $redisConnection->pipeline( function ($r) use (&$id, &$data, &$redisTargetPrefix, &$ttl, &$unpackItems)
        {
            $r->setex( $redisTargetPrefix . $id, $ttl, base64_encode( $data ) );

            foreach( $unpackItems as $item )
            {
                $keyname = $redisTargetPrefix . $id . ":" . $item;

                if( isset( $_SESSION[ $item ] ) )
                {
                    $r->setex( $keyname, $ttl, $_SESSION[ $item ] );
                }
                else
                {
                    $r->del( $keyname );
                }
            }
        } );
    }

    // Removes the session key and any unpacked item keys for $id.
    function redis_session_destroy( $id )
    {
        global $redisServer, $redisTargetPrefix;

        $redisConnection = new \Predis\Client( $redisServer );

        $redisConnection->del( $redisTargetPrefix . $id );

        $unpacked = $redisConnection->keys( $redisTargetPrefix . $id . ":*" );
        foreach( $unpacked as $unp )
        {
            $redisConnection->del( $unp );
        }
    }

    // These functions are all noops for various reasons... opening has no practical meaning in
    // terms of non-shared Redis connections, the same for closing. Garbage collection is handled by
    // Redis anyway.
    function redis_session_open( $path, $name )
    {
    }

    function redis_session_close()
    {
    }

    function redis_session_gc( $age )
    {
    }
}
The issue was solved and it was much dumber than I thought.
The save handler doesn't implement locking in any way. On the report pages there are multiple requests being made to the server via AJAX and the like. One of the AJAX requests starts before the report gets saved to the session, so it reads the old session data and then writes it back at the end of its own request.
Since the report request finishes faster every time, the report would get saved to the session in Redis but would then be overwritten by the other script, which was holding an older copy of the session.
I had help from one of my co-workers. Ugh! This was a headache I'm glad to be over.
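For reference, a minimal sketch of one way to add that missing locking, assuming Predis and Redis 2.6.12+ (for SET with the EX/NX modifiers). The helper names, the :lock key suffix, the timeout and the retry loop are all illustrative, not part of the original handler:

// Hypothetical helpers: acquire would be called at the top of redis_session_read() and
// release at the end of redis_session_write(), so that two concurrent requests for the
// same session id cannot interleave their read/write cycles.
function redis_session_acquire_lock( $redisConnection, $id, $timeout = 30 )
{
    global $redisTargetPrefix;

    $lockKey = $redisTargetPrefix . $id . ':lock';

    // SET ... EX <seconds> NX only succeeds if the lock key does not exist yet;
    // EX makes the lock expire on its own if a request dies without releasing it.
    for( $tries = 0; $tries < $timeout * 10; $tries++ )
    {
        if( $redisConnection->set( $lockKey, getmypid(), 'EX', $timeout, 'NX' ) )
        {
            return true;
        }
        usleep( 100000 ); // wait 100 ms before retrying
    }

    return false;
}

function redis_session_release_lock( $redisConnection, $id )
{
    global $redisTargetPrefix;

    $redisConnection->del( $redisTargetPrefix . $id . ':lock' );
}

A more robust release would verify the lock owner before deleting (for example via a small Lua script), but the sketch shows the basic idea of serializing requests per session id.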
