Laravel session suddenly not being saved

Yesterday it was working fine, but somehow today it's not saving the array to the session.
if (Session::get('header')) {
    $array['header'] = Session::get('header');
} else {
    $header = $this->loadElements(2, false);
    $array['header'] = $header;
    Session::put('header', $header);
}
When I look in the storage\framework\sessions folder, the header is in the latest session file.
What looks very odd to me is that, with the browser window still open, a new session file is created on every page reload.
I did some further testing and it has nothing to do with my code; it is entirely about sessions being recreated after each page load. In another project it works fine, so the session recreation must be the cause.
Why is Laravel suddenly not reading the session file and therefore recreating it?
In config\session.php I changed 'driver' => env('SESSION_DRIVER', 'file') to database and back, and then it works again.
Smh
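For reference, the setting in question looks like this (stock Laravel; the .env value shown is an assumption about this particular setup):
// config/session.php (stock Laravel)
'driver' => env('SESSION_DRIVER', 'file'),
// .env (assumed value in this setup)
// SESSION_DRIVER=file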

Rewrite your code to use Session::has in your condition rather than get.
if (Session::has('header')) {
    $array['header'] = Session::get('header');
} else {
    $header = $this->loadElements(2, false);
    $array['header'] = $header;
    Session::put('header', $header);
}
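Why this matters (an editor's illustration, not from the original answer): Session::get() returns the stored value, so the if-check fails whenever that value is falsy (an empty array, 0, an empty string) and the else branch rebuilds and re-puts the header on every request, whereas Session::has() only checks that the key exists and is not null.
Session::put('header', []);                 // a falsy but valid value
var_dump((bool) Session::get('header'));    // bool(false) – the get() check would rebuild it
var_dump(Session::has('header'));           // bool(true)  – has() only tests presence (non-null)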

Related

TYPO3 9.5 Extbase plugin cache implementation

I am trying to get a cache working in my plugin.
In ext_localconf.php
if (!is_array($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension'] = [];
}
if (!isset($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['frontend'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['frontend'] = 'TYPO3\\CMS\\Core\\Cache\\Frontend\\StringFrontend';
}
if (!isset($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['options'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['options'] = ['defaultLifetime' => 0];
}
if (!isset($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['groups'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['groups'] = ['pages'];
}
In my controller action :
$cacheIdentifier = 'topic' . $topic->getUid();
$cache = GeneralUtility::makeInstance(CacheManager::class)->getCache('myextension');
$result = unserialize($cache->get($cacheIdentifier));
if ($result !== false) {
    DebuggerUtility::var_dump($result);
} else {
    $result = $this->postRepository->findByTopic($topic->getUid(), $page, $itemsPerPage);
    $cache->set($cacheIdentifier, serialize($result), ['tag1', 'tag2']);
    DebuggerUtility::var_dump($result);
}
The first time the page with the action gets loaded all is OK and the entry is made in the database (cf_myextension and cf_myextension_tags).
But the second time, the cache gets loaded and I get an error. Even DebuggerUtility::var_dump($result); does not work:
Call to a member function map() on null
in ../typo3/sysext/extbase/Classes/Persistence/Generic/QueryResult.php line 96
protected function initialize()
{
    if (!is_array($this->queryResult)) {
        $this->queryResult = $this->dataMapper->map($this->query->getType(), $this->persistenceManager->getObjectDataByQuery($this->query));
    }
}
A normal var_dump works and spits out the cache entry. What is the problem? Did I forget something? Can a QueryResult, together with some other variables, not be stored in the cache? I also tried the VariableFrontend cache, which produced the same error.
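(Editor's note, not part of the original thread: the error suggests the unserialized QueryResult comes back without its injected dependencies, e.g. the DataMapper, so a common workaround is to cache only plain data rather than the lazy QueryResult object. A rough sketch under that assumption; getTitle() stands in for whatever accessors the Post model actually has.)
$cached = $cache->get($cacheIdentifier);
if ($cached === false) {
    $posts = $this->postRepository->findByTopic($topic->getUid(), $page, $itemsPerPage);
    $plain = [];
    foreach ($posts as $post) {
        // keep only scalar values so nothing depends on Extbase internals after unserialize()
        $plain[] = ['uid' => $post->getUid(), 'title' => $post->getTitle()];
    }
    $cache->set($cacheIdentifier, serialize($plain), ['tag1', 'tag2']);
    $cached = serialize($plain);
}
$result = unserialize($cached);
DebuggerUtility::var_dump($result);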

Laravel Collective SSH results

I am performing SSH in Laravel whereby I connect to another server and download a file. I am using Laravel Collective https://laravelcollective.com/docs/5.4/ssh
So, the suggested way to do this is something like this
$result = \SSH::into('scripts')->get('/srv/somelocation/'.$fileName, $path);
if ($result) {
    return $path;
} else {
    return 401;
}
Now that successfully downloads the file and moves it to my local server. However, I am always returned 401 because $result seems to be null.
I can't find much about getting the result back from the SSH call. I have also tried
$result = \SSH::into('scripts')->get('/srv/somelocation/'.$fileName, $path, function($line){
    dd( $line.PHP_EOL);
});
But that never gets into the inner function.
Is there any way I can get the result back from the SSH? I just want to handle it properly if there is an error.
Thanks
Rather than rely on $result to give you true / false / error, you can check if the file was downloaded successfully in another way:
// download the file
$result = \SSH::into('scripts')->get('/srv/somelocation/'.$fileName, $path);
// see if downloaded file exists
if ( file_exists($path) ) {
    return $path;
} else {
    return 401;
}
You need to pass the file name as well, like this, in the get and put methods:
$fileName = "example.txt";
$get = \SSH::into('scripts')->get('/remote/somelocation/'.$fileName, base_path($fileName));
In the set method:
$set = \SSH::into('scripts')->set(base_path($fileName),'/remote/location/'.$fileName);
To list files:
$command = SSH::into('scripts')->run(['ls -lsa'], function($output) {
    dd($output);
});

Writing Logs from Console Shell

I have been using CakePHP 2.4.4 to build the interactive web part of my app and that is going very well. CakePHP is awesome.
I am now doing some supporting background processes. The Console and Shell seem to be the way to do it, as they have access to the Models.
I have written the code and have it working, but I am trying to write to the same log that I use for the Models. In the models I have an afterSave function to log all the database changes, and I just use $this->log("$model $logEntry", 'info'); to write to the log.
That doesn't work in the Shell, so I thought CakeLog::write('info', "$model $logEntry"); might work instead, but it doesn't either.
Do I need to initialise the CakeLog to point to the correct log files?
<?php
App::uses('CakeTime', 'Utility');
App::uses('CakeLog', 'Utility');

class ProcessRequestShell extends AppShell {
    // Need to access the request and monitor tables
    public $uses = array('Request');

    private function updateRequest($data){
        $model = 'Request';
        $result = $this->Request->save($data);
        $logEntry = "UPDATE ProcessRequestShell ";
        foreach ($data[$model] AS $k => $v){ $logEntry .= "$k='$v' "; }
        if ($result){
            //$this->log("$model $logEntry", 'info');
            CakeLog::write('info', "$model $logEntry");
        } else {
            //$this->log("$model FAILED $logEntry", 'error');
            CakeLog::write('error', "$model FAILED $logEntry");
        }
        return($result);
    }

    public function main() {
        $options = array('conditions' => array('state' => 0, 'next_state' => 1));
        $this->Request->recursive = 0;
        $requests = $this->Request->find('all', $options);
        // See if the apply_changes_on date/time is past
        foreach ($requests AS $request){
            $this->out("Updating request ".$request['Request']['id'], 1, Shell::NORMAL);
            // Update the next_state to "ready"
            $request['Request']['state'] = 1;
            $request['Request']['next_state'] = 2;
            $this->updateRequest($request);
        }
    }
}
?>
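(For context, an editor's note: a CakePHP 2.x shell class named ProcessRequestShell would typically be invoked from the app directory like this.)
./Console/cake process_request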
Did you set up a default listener/scope for those log types?
If not, they will not get logged.
// Set up a 'default' log configuration for use in the application.
CakeLog::config('default', array('engine' => 'File'));
In your bootstrap.php for example
See http://book.cakephp.org/2.0/en/appendices/2-2-migration-guide.html#log
You need to set up a default log stream writing to a file, for example in app/Config/bootstrap.php.
CakeLog does not auto-configure itself anymore. As a result log files will not be auto-created anymore if no stream is listening. Make sure you got at least one default stream set up, if you want to listen to all types and levels. Usually, you can just set the core FileLog class to output into app/tmp/logs/:
CakeLog::config('default', array(
    'engine' => 'File'
));
See Logging → Writing to logs section of the CookBook 2.x
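(Editor's sketch, not from the original answer: a slightly fuller configuration along the same lines, listening to all log types and writing to app/tmp/logs/shell.log; the 'shell' file name is just an illustrative choice.)
// app/Config/bootstrap.php
CakeLog::config('default', array(
    'engine' => 'File',
    'types'  => null,      // null/empty = listen to all log types
    'file'   => 'shell',   // writes to app/tmp/logs/shell.log
));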

Codeigniter Internationalization i18n trailing slash not added

I have a project going on, and now that the admin section is almost done (I know, a bit late) I am trying to add i18n to the project.
I think everything works fine, except that when I enter http://localhost/my_project (where my_project is the working directory with the CI installation in it), I am redirected to http://localhost/my_project/enhome (no trailing slash after en). Any ideas?
The expected result is http://localhost/my_project/en/home. It is not just the home controller; all controllers behave the same.
.htaccess, base_url and index_page are set correctly (everything works without the i18n).
routes.php is stock apart from the i18n rules:
$route['default_controller'] = "home";
$route['404_override'] = '';
// URI like '/en/about' -> use controller 'about'
$route['^(en|de|fr|nl)/(.+)$'] = "$2";
// '/en', '/de', '/fr' and '/nl' URIs -> use default controller
$route['^(en|de|fr|nl)$'] = $route['default_controller'];
Edit:
I am using the "new" i18n from, I guess, EllisLab. Sorry for the wrong link (that one was for CI version 1.7).
So after some digging, I found the problem in a core file. At the bottom of the script posted below there is a variable $new_url, and it was echoing Redirect triggered to http://192.168.1.184/my_project/enhome without the "fix". I just added the / in there and it works perfectly. I wonder what happened here and whether this is an OK fix or not.
function MY_Lang() {
    parent::__construct();
    global $CFG;
    global $URI;
    global $RTR;
    $this->uri = $URI->uri_string();
    $this->default_uri = $RTR->default_controller;
    $uri_segment = $this->get_uri_lang($this->uri);
    $this->lang_code = $uri_segment['lang'];
    $url_ok = false;
    if ((!empty($this->lang_code)) && (array_key_exists($this->lang_code, $this->languages)))
    {
        $language = $this->languages[$this->lang_code];
        $CFG->set_item('language', $language);
        $url_ok = true;
    }
    if ((!$url_ok) && (!$this->is_special($uri_segment['parts'][0]))) // special URI -> no redirect
    {
        // set default language
        $CFG->set_item('language', $this->languages[$this->default_lang()]);
        $uri = (!empty($this->uri)) ? $this->uri : $this->default_uri;
        // OPB - modification to use i18n also without changing the .htaccess (without pretty url)
        $index_url = empty($CFG->config['index_page']) ? '' : $CFG->config['index_page']."/";
        $new_url = $CFG->config['base_url'].$index_url.$this->default_lang()."/".$uri;
        echo "Redirect triggered to ".$new_url;
        //header("Location: " . $new_url, TRUE, 302);
        // exit;
    }
}
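(Editor's note: the change described above is the added "/" between the language segment and the rest of the URI; the library's original line presumably looked like the first one below.)
// before (assumed original library code): no slash between the language and the URI
$new_url = $CFG->config['base_url'].$index_url.$this->default_lang().$uri;
// after: the added "/" produces .../my_project/en/home instead of .../my_project/enhome
$new_url = $CFG->config['base_url'].$index_url.$this->default_lang()."/".$uri;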
This library is a little old and hasn't been updated by the author in a long time; it works for CI 1.7. Here is an updated version compatible with CI 2.1.*, the same library working the same way, which you can download here:
CodeIgniter 2.1 internationalization i18n

PHP Redis Session not saving

EDIT
I tried debugging this with xdebug and NetBeans. It's weird that the exports work during the debug session if I put in some breakpoints. However, with no breakpoints, a more realistic environment, the exports don't work.
I've tried adding sleeps into some parts of the code.
I think that maybe PHP is ending before the Redis commit is completed. Maybe the Redis connections are being done asynchronously, but I checked Predis and the default is a synchronous connection.
I am working on a reporting tool.
Here is the basic issue.
We store a report into the session object but on later requests when we try to get to the report in the session object it's gone.
Here is a more detailed version.
I store a 'report' object into the session like so
$_SESSION['report_name_unixtimestamp'] = gzcompress( serialize( $reportObject ) );
The user sees the report in some table form and then if they want they can export it. The report could change so the idea behind storing it in the session like this is that when the user exports it to PDF, Excel, etc, they'll be getting a report identical to the one they are viewing.
The user clicks on an export button and on the PHP side it will go into the session, fetch the report via the key provided as a get parameter (uncompresses and unserializes it), create the export and send it to the user for download.
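(Editor's illustration of the export side just described; the report_key parameter name is hypothetical.)
// hypothetical sketch of the export request: the key arrives as a GET parameter,
// and the stored blob is uncompressed and unserialized before the export is built
$key = $_GET['report_key'];   // e.g. 'report_name_unixtimestamp'
$reportObject = unserialize(gzuncompress($_SESSION[$key]));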
This has worked well up until the point that we tried to introduce the Redis caching server as a tool for better session management.
What happens now is the following:
The first time we run the report it will get stored into the cache and the export will work successfully.
We will run the report again, with the same user account in the same session. This changes the unixtimestamp, so there should be two entries in $_SESSION ( $_SESSION['report_name_oldertimestamp'] and $_SESSION['report_name_newertimestamp'] ). When we click on the export button again we get an error saying that the file doesn't exist ( because it hasn't been sent by the server ).
If we check the Redis server for the newer version of the report it isn't there, but the old timestamp is still there.
Now, this worked with the file session management but not with Redis. We've tried the redis module for PHP as well as the pure PHP client Predis.
Does anyone have any ideas?
Here are a few more details :
Redis has NOT run out of memory. We've checked this many times.
We already know that to unserialize the report object in the session the report class has to be included already. ( remember, the first export works fine but anything after that fails )
If we check the php session object during the request that the report is running on, it WILL contain the newer report but it never makes it to Redis.
Below is the save handler that is being used with Predis.
The redis_session_init is the function I call right before session_start() so that it gets registered. I'm not sure how the redis_session_write function works though so maybe someone can help me with that.
<?php
namespace RedisSession
{
    $redisTargetPrefix = "PHPREDIS_SESSION:";
    $unpackItems = array( );
    $redisServer = "tcp://cache.emcweb.com";

    function redis_session_init( $unpack = null, $server = null, $prefix = null )
    {
        global $unpackItems, $redisServer, $redisTargetPrefix;
        if( $unpack !== null )
        {
            $unpackItems = $unpack;
        }
        if( $server !== null )
        {
            $redisServer = $server;
        }
        if( $prefix !== null )
        {
            $redisTargetPrefix = $prefix;
        }
        session_set_save_handler( 'RedisSession\redis_session_open', 'RedisSession\redis_session_close', 'RedisSession\redis_session_read', 'RedisSession\redis_session_write', 'RedisSession\redis_session_destroy', 'RedisSession\redis_session_gc' );
    }

    function redis_session_read( $id )
    {
        global $redisServer, $redisTargetPrefix;
        $redisConnection = new \Predis\Client( $redisServer );
        return base64_decode( $redisConnection->get( $redisTargetPrefix . $id ) );
    }

    function redis_session_write( $id, $data )
    {
        global $unpackItems, $redisServer, $redisTargetPrefix;
        $redisConnection = new \Predis\Client( $redisServer );
        $ttl = ini_get( "session.gc_maxlifetime" );
        $redisConnection->pipeline( function ($r) use (&$id, &$data, &$redisTargetPrefix, &$ttl, &$unpackItems)
        {
            $r->setex( $redisTargetPrefix . $id, $ttl, base64_encode( $data ) );
            foreach( $unpackItems as $item )
            {
                $keyname = $redisTargetPrefix . $id . ":" . $item;
                if( isset( $_SESSION[ $item ] ) )
                {
                    $r->setex( $keyname, $ttl, $_SESSION[ $item ] );
                }
                else
                {
                    $r->del( $keyname );
                }
            }
        } );
    }

    function redis_session_destroy( $id )
    {
        global $redisServer, $redisTargetPrefix;
        $redisConnection = new \Predis\Client( $redisServer );
        $redisConnection->del( $redisTargetPrefix . $id );
        $unpacked = $redisConnection->keys( $redisTargetPrefix . $id . ":*" );
        foreach( $unpacked as $unp )
        {
            $redisConnection->del( $unp );
        }
    }

    // These functions are all noops for various reasons... opening has no practical meaning in
    // terms of non-shared Redis connections, the same for closing. Garbage collection is handled by
    // Redis anyway.
    function redis_session_open( $path, $name )
    {
    }

    function redis_session_close()
    {
    }

    function redis_session_gc( $age )
    {
    }
}
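(Editor's sketch of how the handler above is apparently wired up, based on the description: redis_session_init() is called right before session_start(). The file name is hypothetical.)
require 'RedisSession.php';           // hypothetical file name for the handler above
RedisSession\redis_session_init();    // registers the save handler
session_start();                      // from here on, $_SESSION is backed by Redis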
The issue was solved and it was much dumber than I thought.
The save handler doesn't implement locking in any way. On the report pages there are multiple requests being made to the server via ajax and the like. One of the ajax requests starts before the report gets saved to session space. Thus, it reads the session, then writes the session at the end.
Since the report executes faster each time, the report would get cached to the session in Redis, but it would then be overwritten by the other script, which had an older version of the session.
I had help from one of my co-workers. Ugh! This was a headache I'm glad to be over.
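(Editor's sketch, not part of the original fix: one way to add the missing locking would be a per-session lock key in Redis, acquired before the session is read and released after it is written. The key name and timings below are illustrative assumptions.)
// acquire a simple per-session lock before reading the session (sketch only)
function redis_session_lock( \Predis\Client $redis, $id, $timeoutSeconds = 30 )
{
    $lockKey = "PHPREDIS_SESSION_LOCK:" . $id;   // hypothetical key name
    // spin until SETNX succeeds, i.e. no other request holds the lock
    while( !$redis->setnx( $lockKey, 1 ) )
    {
        usleep( 100000 );   // wait 100 ms and retry
    }
    $redis->expire( $lockKey, $timeoutSeconds );   // safety net if the request dies
    return $lockKey;
}
// ... and after redis_session_write() has finished: $redis->del( $lockKey );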
