I have been using CakePHP 2.4.4 to build the interactive web part of my app and that is going very well. CakePHP is awesome.
I am now writing some supporting background processes. The Console and Shell seem to be the way to do it, as they have access to the Models.
I have written the code and have it working, but I am trying to write to the same log that I use for the Models. In the models I have an afterSave function that logs all the database changes, and I simply used $this->log("$model $logEntry", 'info'); to write to the log.
That doesn't work in the Shell, and I thought CakeLog::write('info', "$model $logEntry"); might work, but it doesn't either.
Do I need to initialise the CakeLog to point to the correct log files?
<?php
App::uses('CakeTime', 'Utility');
App::uses('CakeLog', 'Utility');

class ProcessRequestShell extends AppShell {

    // Need to access the request and monitor tables
    public $uses = array('Request');

    private function updateRequest($data) {
        $model = 'Request';
        $result = $this->Request->save($data);
        $logEntry = "UPDATE ProcessRequestShell ";
        foreach ($data[$model] as $k => $v) {
            $logEntry .= "$k='$v' ";
        }
        if ($result) {
            //$this->log("$model $logEntry", 'info');
            CakeLog::write('info', "$model $logEntry");
        } else {
            //$this->log("$model FAILED $logEntry", 'error');
            CakeLog::write('error', "$model FAILED $logEntry");
        }
        return $result;
    }

    public function main() {
        $options = array('conditions' => array('state' => 0, 'next_state' => 1));
        $this->Request->recursive = 0;
        $requests = $this->Request->find('all', $options);
        // See if the apply_changes_on date/time is past
        foreach ($requests as $request) {
            $this->out("Updating request " . $request['Request']['id'], 1, Shell::NORMAL);
            // Update the next_state to "ready"
            $request['Request']['state'] = 1;
            $request['Request']['next_state'] = 2;
            $this->updateRequest($request);
        }
    }
}
Did you set up a default listener/scope for those log types? If not, they will not get logged.
// Set up a 'default' log configuration for use in the application.
CakeLog::config('default', array('engine' => 'File'));
In your bootstrap.php, for example.
See http://book.cakephp.org/2.0/en/appendices/2-2-migration-guide.html#log
You need to set up a default log stream writing to file, for example in app/Config/bootstrap.php.
CakeLog does not auto-configure itself anymore. As a result log files will not be auto-created anymore if no stream is listening. Make sure you got at least one default stream set up, if you want to listen to all types and levels. Usually, you can just set the core FileLog class to output into app/tmp/logs/:
CakeLog::config('default', array(
    'engine' => 'File'
));
See Logging → Writing to logs section of the CookBook 2.x
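For the shell above, a minimal bootstrap.php setup could instead mirror what a stock app/Config/core.php does, splitting levels across two files; the level lists and file names below are the usual 2.x defaults, but treat them as assumptions for your app:
CakeLog::config('debug', array(
    'engine' => 'File',
    'types' => array('notice', 'info', 'debug'),
    'file' => 'debug',
));
CakeLog::config('error', array(
    'engine' => 'File',
    'types' => array('warning', 'error', 'critical', 'alert', 'emergency'),
    'file' => 'error',
));
With that in place, the CakeLog::write('info', ...) calls in the shell should land in app/tmp/logs/debug.log.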
I am trying to get a cache working in my plugin.
In ext_localconf.php:
if (!is_array($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension'] = [];
}
if (!isset($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['frontend'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['frontend'] = 'TYPO3\\CMS\\Core\\Cache\\Frontend\\StringFrontend';
}
if (!isset($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['options'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['options'] = ['defaultLifetime' => 0];
}
if (!isset($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['groups'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['groups'] = ['pages'];
}
In my controller action:
$cacheIdentifier = 'topic' . $topic->getUid();
$cache = GeneralUtility::makeInstance(CacheManager::class)->getCache('myextension');
$result = unserialize($cache->get($cacheIdentifier));
if ($result !== false) {
    DebuggerUtility::var_dump($result);
} else {
    $result = $this->postRepository->findByTopic($topic->getUid(), $page, $itemsPerPage);
    $cache->set($cacheIdentifier, serialize($result), ['tag1', 'tag2']);
    DebuggerUtility::var_dump($result);
}
The first time the page with the action gets loaded, all is OK and the entry has been made in the database (cf_myextension and cf_myextension_tags).
But the second time, the cache gets loaded and I get an error. Even DebuggerUtility::var_dump($result); does not work:
Call to a member function map() on null
in ../typo3/sysext/extbase/Classes/Persistence/Generic/QueryResult.php line 96
protected function initialize()
{
    if (!is_array($this->queryResult)) {
        $this->queryResult = $this->dataMapper->map($this->query->getType(), $this->persistenceManager->getObjectDataByQuery($this->query));
    }
}
A normal var_dump works and spits out the cache entry. What is the problem? Am I forgetting something? Can a QueryResult, together with some other variables, not be stored as an array in the cache? I also tried the VariableFrontend cache, which produced the same error.
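One hedged reading of that error: an unserialized QueryResult comes back without its injected dataMapper, so anything that triggers initialize() hits null. A sketch of a workaround under that assumption is to cache plain objects rather than the live QueryResult:
$result = $this->postRepository->findByTopic($topic->getUid(), $page, $itemsPerPage);
// Assumption: toArray() forces the mapping now, so the cached value no
// longer depends on the data mapper when it is unserialized later.
$cache->set($cacheIdentifier, serialize($result->toArray()), ['tag1', 'tag2']);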
The E-tools GUI application indents and pretty-formats HTML, JavaScript, JSON and SQL. To install the e-tools snap package in all currently supported versions of Ubuntu, open the terminal and type:
sudo snap install e-tools
Before asking this question, I did my best by reading several questions on SO (tagged Ratchet and dealing with similar issues) but to no avail. I even asked a question which received no attention, and I therefore deleted it to write another one (that hopefully is clearer).
My final goal is to build a one-to-one private chat application using Ratchet. Everything is working fine except that I can't send a message to a specific user.
Every logged-in user connects to the WebSocket server while accessing the secured area of the website:
$(document).ready(function() {
    var conn = new WebSocket('ws://localhost:8080');
    conn.onopen = function(e) {
        console.log("Connection established!");
        // Here I need to send the logged in user_id to the websocket server
        // and get it in onOpen so that I can index my array of connections
        // with user_id instead of $connection->resourceId, I explain more below
    };
    conn.onmessage = function(e) {
        console.log(e.data);
    };
});
When a user writes a message in the chat box, the message is sent via AJAX to the web server and then pushed to the WebSocket server using ZeroMQ. In the controller:
// Persistence of Message(message_id, sender_id, receiver_id, message_text)
.....
$context = new \ZMQContext();
$socket = $context->getSocket(\ZMQ::SOCKET_PUSH, 'my pusher');
$socket->connect("tcp://localhost:5555");
$pushData = array(
    'receiver_id' => $receiver_id,
    'sender_id' => $user->getId(),
    'message' => $message->getMessageText(),
);
$socket->send(json_encode($pushData));
So in the end, my WebSocket server knows the id of the receiver from the JSON. But how will it know which connection belongs to that user? In other words, I need to store WebSocket connections in an array that is indexed by the user id.
<?php
namespace RealTime;

use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Wamp\WampServerInterface;

class Pusher implements WampServerInterface, MessageComponentInterface {

    private $clients = array();

    public function onOpen(ConnectionInterface $conn) {
        $this->clients[$conn->resourceId] = $conn;
        // I need here to get the user_id received from the browser while opening the connection
    }

    public function onMessageEntry($entry) {
        $entryData = json_decode($entry, true);
        // This is not what I need (it sends to all users in the array)
        foreach ($this->clients as $key => $client) {
            $client->send($entryData['message']);
        }
    }

    public function onMessage(ConnectionInterface $from, $msg) {
        echo $msg;
    }
}
And the websocket server:
<?php
require dirname(__DIR__) . '/vendor/autoload.php';

use RealTime\Pusher;

$loop = React\EventLoop\Factory::create();
$pusher = new Pusher;

$context = new React\ZMQ\Context($loop);
$pull = $context->getSocket(ZMQ::SOCKET_PULL);
$pull->bind('tcp://127.0.0.1:5555');
$pull->on('message', array($pusher, 'onMessageEntry'));

$webSock = new React\Socket\Server($loop);
$webSock->listen(8080, '0.0.0.0');
$webServer = new Ratchet\Server\IoServer(
    new Ratchet\Http\HttpServer(
        new Ratchet\WebSocket\WsServer(
            new Ratchet\Wamp\WampServer(
                $pusher
            )
        )
    ),
    $webSock
);

$loop->run();
Questions:
How do I send the logged-in user_id from the client side while opening the connection? I need to have the value in the WebSocket server so that I can index my array of clients with it ($clients[user_id] = $conn instead of $clients[resourceId] = $conn). I tried the JavaScript send function, but I don't know where to receive the sent data (even onMessage is not printing anything).
Why is the onMessage method not executing even though MessageComponentInterface is implemented? (Is it because I have the onMessageEntry method plus the $pull->on('message', array($pusher, 'onMessageEntry')); line of code?)
Thank you.
Actually, on my last try I gave up on PHP WebSockets (it was so complicated to make this work) and started using Socket.IO with Node.js, which solved my entire problem and gave me a functional, simple chat system.
This is what I found, and any suggestions to enhance this solution are welcome.
One can use the Ratchet SessionProvider. This requires using one of the Symfony custom session handlers, as indicated. In the following code I use the PdoSessionHandler.
<?php
require dirname(__DIR__) . '/vendor/autoload.php';

use YourDirectory\Pusher;
use Symfony\Component\HttpFoundation\Session\Storage\Handler;
use \Ratchet\Session\SessionProvider;

$pusher = new Pusher;

$pdo = new PDO('mysql:host=localhost;dbname=community', 'root', null);
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// This info is related to your db
$dbOptions = array(
    'db_table' => 'session',
    'db_id_col' => 'sess_id',
    'db_data_col' => 'sess_data',
    'db_time_col' => 'sess_time',
);

$loop = \React\EventLoop\Factory::create();
$context = new \React\ZMQ\Context($loop);
$pull = $context->getSocket(\ZMQ::SOCKET_PULL);
$pull->bind('tcp://127.0.0.1:5555');
$pull->on('message', array($pusher, 'onMessageEntry'));

$webSock = new React\Socket\Server($loop);
$webSock->listen(8080, '0.0.0.0'); // Binding to 0.0.0.0 means remotes can connect
$webServer = new Ratchet\Server\IoServer(
    new Ratchet\Http\HttpServer(
        new Ratchet\WebSocket\WsServer(
            new SessionProvider(
                new Ratchet\Wamp\WampServer(
                    $pusher
                ),
                new Handler\PdoSessionHandler($pdo, $dbOptions)
            )
        )
    ),
    $webSock
);

$loop->run();
Then my stub class will become:
public function onOpen(ConnectionInterface $conn) {
    $this->clients[$conn->Session->get('current_user_id')] = $conn;
}

public function onMessageEntry($entry) {
    $entryData = json_decode($entry, true);
    $receiverConnection = $this->clients[$entryData['receiver_id']];
    $receiverConnection->send($entryData['message']);
}
Beforehand, I added the user id to the session on the web server (in the controller that returns the initial page):
$user = $this->getUser();
$request->getSession()->set('current_user_id', $user->getId());
PS:
Moving to the PdoSessionHandler can be done by implementing this (Symfony).
I still can't answer question 2, but all the logic that would go in onMessage is now moved to onMessageEntry, which temporarily satisfies the needs.
As an alternative way to make the association between a client connection and its ID: just after opening the connection to your WebSocket server, send a message over the socket containing the user id, and use it to index the connection object in your array.
For the second question: as far as I know the default WebSocket implementation does not work properly, especially with the pub/sub protocol; you need a WebSocket library for that. I suggest AutobahnJS, a good WebSocket library with a lot of wonderful features.
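A minimal sketch of the server side of that handshake, under two assumptions: the pusher is mounted in a plain WsServer (a WampServer wrapper translates frames into WAMP events, so a raw onMessage would not fire), and the client's first frame is JSON such as {"user_id": 123} (the field name is made up here):
public function onMessage(ConnectionInterface $from, $msg) {
    $data = json_decode($msg, true);
    // Registration frame: remember this connection under the user's id
    if (is_array($data) && isset($data['user_id'])) {
        $this->clients[$data['user_id']] = $from;
    }
}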
My version of Yii: 1.1.12... scratch that, I upgraded to version 1.1.13, and it still doesn't work.
I tried this:
Yii::app()->cache->set('someKey', $auctions);
$data = Yii::app()->cache->get('someKey');
print_r($data);
And I see the data that I stored! However, if I try this:
Yii::app()->cache->set('someKey', $auctions, 10);
$data = Yii::app()->cache->get('someKey');
print_r( $data );
I see nothing. Why is Yii ignoring my time interval? What am I missing?
** EDIT **
My caching is defined in config as:
'cache' => array(
    'class' => 'system.caching.CMemCache',
    'useMemcached' => false,
    'servers' => array(
        array('host' => '127.0.0.1', 'port' => 11211, 'weight' => 60),
        //array('host' => 'server2', 'port' => 11211, 'weight' => 40),
    ),
),
I know the Memcache is working, because I've tested it with this example outside of the YII framework:
$memcache = new Memcache;
$memcache->connect("localhost",11211);
$tmp_object = new stdClass;
$tmp_object->str_attr = "test";
$memcache->set("mysupertest",$tmp_object,false,5);
var_dump($memcache->get("mysupertest"));
This works and the item is cached for 5 seconds...
It seems like it's a bug in CMemCache.php. There is this function:
protected function setValue($key, $value, $expire)
{
    if ($expire > 0)
        $expire += time();
    else
        $expire = 0;
    return $this->useMemcached ? $this->_cache->set($key, $value, $expire) : $this->_cache->set($key, $value, 0, $expire);
}
MemCache does not want the time to be added, so my quick fix is:
protected function setValue($key, $value, $expire)
{
    return $this->useMemcached ? $this->_cache->set($key, $value, $expire) : $this->_cache->set($key, $value, 0, $expire);
}
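For context (and as a caveat to this fix): the PHP Memcache extension documents both forms of the expire argument, so the original code is not obviously wrong; if absolute timestamps fail while plain seconds work, a clock mismatch between the web server and the memcached host is one plausible culprit. Both calls below are valid per the extension's documentation:
$memcache->set('k1', 'v', 0, 300);          // relative: expires in 300 seconds (max 30 days)
$memcache->set('k2', 'v', 0, time() + 300); // absolute: a Unix timestamp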
Well, make sure $auctions is well defined.
Yii::app()->cache->set('someKey', array('someValue'), 120); // 120 means 2 minutes
print_r(Yii::app()->cache->get('someKey')); // you should see the array with the single value, I do see it when I try to run it
Make sure the config is OK and you are not using CDummyCache. Mine looks like this:
'components' => array(
    ...
    // Add a cache component to store data.
    // For demo, we are using CFileCache; you can use any
    // type your server is configured for. This is the simplest as it
    // requires no configuration or setup on the server.
    'cache' => array(
        'class' => 'system.caching.CFileCache',
    ),
    ...
),
I am trying to get the new PDO driver running in CodeIgniter 2.1.1 in (to start with) the local (Mac OS 10.7) copy of my app.
I initially coded it using Active Record for all db operations, and I am now thinking I want to use PDO prepared statements in my model files going forward.
I modified 'application/config/database.php' like so:
(note a couple minor embedded questions)
[snip]
$active_group = 'local_dev';
$active_record = TRUE; // <-- BTW, will this need to stay TRUE to make CI sessions work? For better security, don't we want db-based CI sessions to use PDO too?
// http://codeigniter.com/user_guide/database/configuration.html:
// Note: some CodeIgniter classes such as Sessions require Active Record to be enabled to access certain functionality.
// This is the config setting that I am guessing (?) is my main problem:
$db['local_dev']['hostname'] = 'localhost:/tmp/mysql.sock';
// 1.) if $db['local_dev']['dbdriver'] = 'mysql', then 'localhost:/tmp/mysql.sock' above works; 2.) but if $db['local_dev']['dbdriver'] = 'pdo', then it fails with the error msg. shown below.
$db['local_dev']['username'] = 'root';
$db['local_dev']['password'] = '';
$db['local_dev']['database'] = 'mydbname';
$db['local_dev']['dbdriver'] = 'pdo';
$db['local_dev']['dbprefix'] = '';
$db['local_dev']['pconnect'] = TRUE;
$db['local_dev']['db_debug'] = TRUE;//TRUE
$db['local_dev']['cache_on'] = FALSE;
$db['local_dev']['cachedir'] = '';
$db['local_dev']['char_set'] = 'utf8';
$db['local_dev']['dbcollat'] = 'utf8_general_ci';
$db['local_dev']['swap_pre'] = '';
$db['local_dev']['autoinit'] = TRUE;
$db['local_dev']['stricton'] = FALSE;
[snip]
With the above config., as soon as I load a controller, I get this error message:
Fatal error: Uncaught exception 'PDOException' with message 'could not find driver' in
/Library/WebServer/Documents/system/database/drivers/pdo/pdo_driver.php:114 Stack trace: #0
/Library/WebServer/Documents/system/database/drivers/pdo/pdo_driver.php(114): PDO->__construct('localhost:/tmp/...', 'root', '', Array) #1 /Library/WebServer/Documents/system/database/DB_driver.php(115): CI_DB_pdo_driver->db_pconnect() #2
/Library/WebServer/Documents/system/database/DB.php(148): CI_DB_driver->initialize() #3
/Library/WebServer/Documents/system/core/Loader.php(346): DB('', NULL) #4
/Library/WebServer/Documents/system/core/Loader.php(1171): CI_Loader->database() #5
/Library/WebServer/Documents/system/core/Loader.php(152): CI_Loader->_ci_autoloader() #6
/Library/WebServer/Documents/system/core/Con in
/Library/WebServer/Documents/system/database/drivers/pdo/pdo_driver.php on line 114
I tried swapping out the 'pdo_driver.php' file for the one on GitHub, as per this:
http://codeigniter.com/forums/viewthread/206124/
...but that just generates other errors, not to mention it is disturbing to a newbie who does not want to touch the system files if at all possible.
This thread also seems to imply the need to hack the 'pdo_driver.php' system file:
CodeIgniter PDO database driver not working
It seems odd to me, though, that (someone thought that) a hack to a system file is needed to make PDO work in CI v2.1.1, huh?
Thanks for any suggestions I can try.
I don't know if this might be helpful for you since you already started using the CI functions, but I made my own library for PDO with SQLite and just autoload it. My needs were simple, so it serves its purpose.
<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');
/**
 * CodeIgniter PDO Library
 *
 * @author Michael Cruz
 * @version 1.0
 */
class Sqlite_pdo
{
    var $DB;

    public function connect($path) {
        try {
            $this->DB = new PDO('sqlite:' . $path);
        } catch (PDOException $e) {
            print "Error: " . $e->getMessage();
            die();
        }
    }

    public function simple_query($SQL) {
        $results = $this->DB->query($SQL)
            or die('SQL Error: ' . print_r($this->DB->errorInfo()));
        return $results;
    }

    public function prepared_query($SQL, $bind = array()) {
        $q = $this->DB->prepare($SQL)
            or die('Prepare Error: ' . print_r($this->DB->errorInfo()));
        $q->execute($bind)
            or die('Execute Error: ' . print_r($this->DB->errorInfo()));
        $q->setFetchMode(PDO::FETCH_BOTH);
        return $q;
    }

    public function my_prepare($SQL) {
        $q = $this->DB->prepare($SQL)
            or die('Error: ' . print_r($this->DB->errorInfo()));
        return $q;
    }

    public function my_execute($q, $bind) {
        $q->execute($bind)
            or die('Error: ' . print_r($this->DB->errorInfo()));
        $q->setFetchMode(PDO::FETCH_BOTH);
        return $q;
    }

    public function last_insert_id() {
        return $this->DB->lastInsertId();
    }
}
/* End of file Sqlite_pdo.php */
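A hypothetical usage sketch from a controller, assuming the class above is saved as application/libraries/Sqlite_pdo.php (the file path and database path here are made up):
$this->load->library('sqlite_pdo');
$this->sqlite_pdo->connect(APPPATH . 'data/mydb.sqlite');
$q = $this->sqlite_pdo->prepared_query('SELECT * FROM users WHERE id = ?', array(1));
$row = $q->fetch();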
Thanks to the noob thread http://codeigniter.com/forums/viewthread/180277/ (InsiteFX's answer)...
I figured out that the below seems to work (need to test more to be 100%... but at least the error messages are gone):
$db['local_dev']['hostname'] = 'mysql:host=127.0.0.1';
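If the separate 'database' key turns out not to be picked up by the 2.1.x PDO driver, the database can presumably be folded into the DSN as well (an assumption, not tested here):
$db['local_dev']['hostname'] = 'mysql:host=127.0.0.1;dbname=mydbname'; // dbname value is illustrative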
EDIT
I tried debugging this with Xdebug and NetBeans. It's weird that the exports will work during the debug session if I put in some breakpoints. However, with no breakpoints (a more realistic environment), the exports don't work.
I've tried adding sleeps into some parts of the code.
I think that maybe PHP is ending before the Redis commit is completed. Maybe the Redis connections are being done asynchronously, but I checked Predis and the default is a synchronous connection.
I am working on a reporting tool.
Here is the basic issue.
We store a report in the session object, but on later requests, when we try to get to the report in the session object, it's gone.
Here is a more detailed version.
I store a 'report' object into the session like so
$_SESSION['report_name_unixtimestamp'] = gzcompress( serialize( $reportObject ) );
The user sees the report in some table form and then if they want they can export it. The report could change so the idea behind storing it in the session like this is that when the user exports it to PDF, Excel, etc, they'll be getting a report identical to the one they are viewing.
The user clicks on an export button and on the PHP side it will go into the session, fetch the report via the key provided as a get parameter (uncompresses and unserializes it), create the export and send it to the user for download.
This has worked well up until the point that we tried to introduce the Redis caching server as a tool for better session management.
What happens now is the following:
The first time we run the report it will get stored into the cache and the export will work successfully.
We will run the report again, with the same user account in the same session. This changes the unix timestamp, and so there should be two entries in $_SESSION ($_SESSION['report_name_oldertimestamp'] and $_SESSION['report_name_newertimestamp']). When we click on the export button again, we get an error saying that the file doesn't exist (because it hasn't been sent by the server).
If we check the redis server for the newer version of the report it isn't there, but the old timestamp is still there.
Now, this worked with the file-based session management but not with Redis. We've tried the Redis module for PHP as well as the pure PHP client Predis.
Does anyone have any ideas?
Here are a few more details :
Redis has NOT run out of memory. We've checked this many times.
We already know that to unserialize the report object in the session, the report class has to be included already (remember, the first export works fine but anything after that fails).
If we check the php session object during the request that the report is running on, it WILL contain the newer report but it never makes it to Redis.
Below is the save handler that is being used with Predis.
The redis_session_init is the function I call right before session_start() so that it gets registered. I'm not sure how the redis_session_write function works, though, so maybe someone can help me with that.
<?php
namespace RedisSession
{
    $redisTargetPrefix = "PHPREDIS_SESSION:";
    $unpackItems = array();
    $redisServer = "tcp://cache.emcweb.com";

    function redis_session_init( $unpack = null, $server = null, $prefix = null )
    {
        global $unpackItems, $redisServer, $redisTargetPrefix;
        if( $unpack !== null )
        {
            $unpackItems = $unpack;
        }
        if( $server !== null )
        {
            $redisServer = $server;
        }
        if( $prefix !== null )
        {
            $redisTargetPrefix = $prefix;
        }
        session_set_save_handler( 'RedisSession\redis_session_open', 'RedisSession\redis_session_close', 'RedisSession\redis_session_read', 'RedisSession\redis_session_write', 'RedisSession\redis_session_destroy', 'RedisSession\redis_session_gc' );
    }

    function redis_session_read( $id )
    {
        global $redisServer, $redisTargetPrefix;
        $redisConnection = new \Predis\Client( $redisServer );
        return base64_decode( $redisConnection->get( $redisTargetPrefix . $id ) );
    }

    function redis_session_write( $id, $data )
    {
        global $unpackItems, $redisServer, $redisTargetPrefix;
        $redisConnection = new \Predis\Client( $redisServer );
        $ttl = ini_get( "session.gc_maxlifetime" );
        $redisConnection->pipeline( function ($r) use (&$id, &$data, &$redisTargetPrefix, &$ttl, &$unpackItems)
        {
            $r->setex( $redisTargetPrefix . $id, $ttl, base64_encode( $data ) );
            foreach( $unpackItems as $item )
            {
                $keyname = $redisTargetPrefix . $id . ":" . $item;
                if( isset( $_SESSION[ $item ] ) )
                {
                    $r->setex( $keyname, $ttl, $_SESSION[ $item ] );
                }
                else
                {
                    $r->del( $keyname );
                }
            }
        } );
    }

    function redis_session_destroy( $id )
    {
        global $redisServer, $redisTargetPrefix;
        $redisConnection = new \Predis\Client( $redisServer );
        $redisConnection->del( $redisTargetPrefix . $id );
        $unpacked = $redisConnection->keys( $redisTargetPrefix . $id . ":*" );
        foreach( $unpacked as $unp )
        {
            $redisConnection->del( $unp );
        }
    }

    // These functions are all noops for various reasons... opening has no practical meaning in
    // terms of non-shared Redis connections, the same for closing. Garbage collection is handled by
    // Redis anyway.
    function redis_session_open( $path, $name )
    {
    }

    function redis_session_close()
    {
    }

    function redis_session_gc( $age )
    {
    }
}
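A hypothetical usage sketch for a page that runs reports, based on the description above (the include path and arguments are made up):
require 'redis_session.php';
RedisSession\redis_session_init( array(), 'tcp://cache.emcweb.com' );
session_start();
$_SESSION['report_name_unixtimestamp'] = gzcompress( serialize( $reportObject ) );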
The issue was solved, and it was much dumber than I thought.
The save handler doesn't implement locking in any way. On the report pages there are multiple requests being made to the server via AJAX and the like. One of the AJAX requests starts before the report gets saved to session space; thus, it reads the session, then writes the session at the end.
Since the report executes faster each time, the report would get cached to the session in Redis but would then be overwritten by the other script, which had an older version of the session.
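For readers hitting the same race: a minimal sketch of the kind of lock the handler lacks, assuming Predis (the key suffix, TTL and polling interval are illustrative):
function redis_session_acquire_lock( $redisConnection, $id, $prefix )
{
    $lockKey = $prefix . $id . ':lock';
    for( $i = 0; $i < 50; $i++ )
    {
        // SET key value EX 10 NX: succeeds only if nobody else holds the lock
        if( $redisConnection->set( $lockKey, '1', 'EX', 10, 'NX' ) )
        {
            return true;
        }
        usleep( 100000 ); // wait 100 ms and retry
    }
    return false;
}
redis_session_read would call this before fetching the data, and the write/close path would delete the lock key when done.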
I had help from one of my co-workers. Ugh! This was a headache I'm glad to be over.