I have ItemResource reference in data written into redis - laravel

I use resources on the API side to extend data (labels, formatting, related data, ...). But I also used ItemResource when writing into Redis:
$activeItems = Item::with('creator')->getByPublished(true)->get();
foreach ($activeItems as $item) {
    $redisUniqueKey = CachingDataTypeEnum::cdtItem . '_' . $item->id;
    Redis::set($redisUniqueKey, serialize(new ItemResource($item->toArray())));
    Redis::expire($redisUniqueKey, $itemsCachingHours * 3600 * 100);
    $generatedItemsCount++;
}
The resulting item in Redis has a header like:
O:34:"App\Http\Resources\ItemResource":3:{s:8:"resource";a:10:{s:2:"id";i:4;s:5:"title";s:37:"Item C...
I dislike this ItemResource reference and wonder whether it could cause problems when these Redis data are used in the future. How can it be fixed?
"laravel/framework": "^9.47",
"predis/predis": "^2.1",


Joomla plugin works correctly without caching and does not work with caching

If caching is turned off, everything works correctly. If caching is enabled, the plugin works at first, but after 5 minutes it stops processing the tag; it doesn't matter whether normal or progressive caching is used. When you delete the cache, processing works again, and again after 5 minutes it stops.
Here is the complete plugin code. What could be the reason?
The code inserted into the article looks like this, for example: {robokassa 5}
class plgContentRobokassa extends JPlugin
{
    public $cont = '';

    public function onContentPrepare($context, &$row, &$params, $page = 0)
    {
        $doc = JFactory::getDocument();
        $doc->addStyleSheet(JURI::root(true) . '/plugins/content/robokassa/css/robokassa.css');
        $this->cont = $context;
    }

    public function onAfterRender()
    {
        $is_test = '0';
        $mrh_pass1 = '*****';
        $mrh_login = '******';
        $app = JFactory::getApplication();
        if ($app->getName() != 'site') {
            return true;
        }
        // Get the code word from the parameters
        $varname = 'robokassa';
        // Get the site body
        $html = $app->getBody();
        // If there are no tags
        if (strpos($html, $varname) === false)
        {
            return true;
        }
        $bodyPos = stripos($html, '<body');
        $preContent = '';
        if ($bodyPos > -1)
        {
            $preContent = substr($html, 0, $bodyPos);
            $html = substr($html, $bodyPos);
        }
        // Define the search pattern
        $pattern = '#\{' . $varname . ' ([0-9]+)\}#i';
        // Collect all found tags into an array
        if (preg_match_all($pattern, $html, $matches))
        {
            $db = JFactory::getDbo();
            $query = $db->getQuery(true);
            foreach ($matches[0] as $i => $match)
            {
                // *replace code here*
            }
            $html = $preContent . $html . $script_alert; // $script_alert presumably built in the elided loop
            // Put everything back into the body
            $app->setBody($html);
        }
    }
}
The plugin events are invoked in the order the plugins have in the Plugin administration.
Cache is one of the System plugins, so when its cache is valid the response is retrieved from the cache and the plugins that come before it are not executed.
You should be able to solve this by moving your plugin after Cache (or last).
If the plugin is of a different kind, i.e. Content, there is no way to achieve this.
You can change it into a System plugin (all events from any plugin category are also available in System plugins).
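For illustration, a minimal sketch of the System-plugin variant (Joomla 3.x API; the class name and the reuse of the replacement logic are assumptions):

// Lives in plugins/system/robokassa/ instead of plugins/content/.
class plgSystemRobokassa extends JPlugin
{
    public function onAfterRender()
    {
        $app = JFactory::getApplication();
        if ($app->getName() != 'site') {
            return true;
        }
        // ... same {robokassa N} tag-replacement logic as in the Content plugin above ...
        return true;
    }
}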
Another alternative: you could put your code in a module and prevent caching in the module; but this will not work with the page cache.
Finally (and I don't know why I didn't write it first), make an Ajax call to a page which you will NOT cache (by excluding it in the page cache plugin configuration): that way the page will be cached and each time it will retrieve the current data.
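A sketch of that Ajax variant, using Joomla's com_ajax with a plugin in the ajax group (plugin and parameter names are assumptions):

// Called via index.php?option=com_ajax&plugin=robokassa&format=raw&id=5
class plgAjaxRobokassa extends JPlugin
{
    public function onAjaxRobokassa()
    {
        $id = JFactory::getApplication()->input->getInt('id');
        // ... build the current payment markup for article $id ...
        return '<div class="robokassa">...</div>'; // placeholder markup
    }
}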

TYPO3 9.5 Extbase plugin cache implementation

I am trying to get a cache working in my plugin.
In ext_localconf.php:
if (!is_array($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension'] = [];
}
if (!isset($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['frontend'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['frontend'] = 'TYPO3\\CMS\\Core\\Cache\\Frontend\\StringFrontend';
}
if (!isset($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['options'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['options'] = ['defaultLifetime' => 0];
}
if (!isset($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['groups'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['groups'] = ['pages'];
}
In my controller action:
$cacheIdentifier = 'topic' . $topic->getUid();
$cache = GeneralUtility::makeInstance(CacheManager::class)->getCache('myextension');
$result = unserialize($cache->get($cacheIdentifier));
if ($result !== false) {
    DebuggerUtility::var_dump($result);
} else {
    $result = $this->postRepository->findByTopic($topic->getUid(), $page, $itemsPerPage);
    $cache->set($cacheIdentifier, serialize($result), ['tag1', 'tag2']);
    DebuggerUtility::var_dump($result);
}
The first time the page with the action gets loaded, all is OK and the entry has been made in the database (cf_myextension and cf_myextension_tags).
But the 2nd time the cache gets loaded and I get an error. Even DebuggerUtility::var_dump($result); does not work:
Call to a member function map() on null
in ../typo3/sysext/extbase/Classes/Persistence/Generic/QueryResult.php line 96
protected function initialize()
{
    if (!is_array($this->queryResult)) {
        $this->queryResult = $this->dataMapper->map($this->query->getType(), $this->persistenceManager->getObjectDataByQuery($this->query));
    }
}
A normal var_dump works and spits out the cache entry. What is the problem? Am I forgetting something? Can a QueryResult, together with some other variables, not be stored as an array in the cache? I also tried the VariableFrontend cache, which produced the same error.
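A plausible cause (my assumption; the thread does not confirm it): an Extbase QueryResult is lazy and holds references to injected services such as the DataMapper; after unserialize() those references are null, hence map() on null as soon as the cached result is first touched. A sketch of a workaround is to materialize the result before caching:

// Sketch: QueryResultInterface::toArray() runs the query and returns plain
// domain objects, so nothing lazy ends up in the cache. Lazy relations inside
// the objects could still cause similar issues.
$result = $this->postRepository->findByTopic($topic->getUid(), $page, $itemsPerPage)->toArray();
$cache->set($cacheIdentifier, serialize($result), ['tag1', 'tag2']);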

No definition found for Table yahoo.finance.xchange

I have a service which uses a Yahoo! Finance table yahoo.finance.xchange. This morning I noticed it has stopped working because suddenly Yahoo! started to return an error saying:
{
    "error": {
        "lang": "en-US",
        "description": "No definition found for Table yahoo.finance.xchange"
    }
}
This is the request URL. An interesting fact: if I try to refresh the query multiple times, sometimes I get back a correct response, but this happens very rarely (like 10% of the time). Days before, everything was fine.
Does this mean the Yahoo API is down, or am I missing something because the API was changed? I would appreciate any help.
Since I have the same problem, it started today too, others came to post at exactly the same time, and it still works most of the time, the only explanation I can find is that they have some random database errors on their end, and we can hope that this will be solved soon. I also have a 20% rate of failures when refreshing the page of the query.
My guess is that they use many servers to handle the requests (let's say 8) and that one of them is empty or doesn't have that table for some reason, so whenever it directs the query to that server, the error is returned.
Temporary solution: just modify your script to retry 3-4 times. That did it for me, because among 5 attempts at least one succeeds.
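A minimal retry sketch in PHP (the helper name and the fixed delay are my own):

// Retries a flaky HTTP endpoint a few times before giving up.
function fetchWithRetry($url, $attempts = 4)
{
    for ($i = 0; $i < $attempts; $i++) {
        $body = @file_get_contents($url);
        if ($body !== false) {
            return $body;
        }
        usleep(500000); // wait 0.5 s between attempts
    }
    return false;
}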
I solved this issue by using quote.yahoo.com instead of the query.yahooapis.com service. Here's my code:
function devise($currency_from, $currency_to, $amount_from)
{
    $url = "http://quote.yahoo.com/d/quotes.csv?s=" . $currency_from . $currency_to . "=X" . "&f=l1&e=.csv";
    $handle = fopen($url, "r");
    $exchange_rate = fread($handle, 2000);
    fclose($handle);
    $amount_to = $amount_from * $exchange_rate;
    return round($amount_to, 2);
}
EDIT: the above no longer works. At this point, let's just forget about Yahoo. Use this instead:
function convertCurrency($from, $to, $amount)
{
    $url = file_get_contents('https://free.currencyconverterapi.com/api/v5/convert?q=' . $from . '_' . $to . '&compact=ultra');
    $json = json_decode($url, true);
    // The compact response is a single-pair array like {"USD_EUR":0.89},
    // so imploding it yields the rate itself.
    $rate = implode(" ", $json);
    $total = $rate * $amount;
    return round($total, 2);
}
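Usage (hypothetical currencies and amount; the result depends on the rate at call time):

echo convertCurrency('USD', 'EUR', 100); // e.g. 89.05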
Same error; I migrated to http://finance.yahoo.com.
Here is a C# example:
private static readonly ILog Log = LogManager.GetCurrentClassLogger();
private int YahooTimeOut = 4000;
private int Try { get; set; }

public decimal GetRate(string from, string to)
{
    var url = string.Format("http://finance.yahoo.com/d/quotes.csv?e=.csv&f=sl1d1t1&s={0}{1}=X", from, to);
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.UseDefaultCredentials = true;
    request.ContentType = "text/csv";
    request.Timeout = YahooTimeOut;
    try
    {
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            var resStream = response.GetResponseStream();
            using (var reader = new StreamReader(resStream))
            {
                var html = reader.ReadToEnd();
                var values = Regex.Split(html, ",");
                var rate = Convert.ToDecimal(values[1], new CultureInfo("en-US"));
                if (rate == 0)
                {
                    Thread.Sleep(550);
                    ++Try;
                    return Try < 5 ? GetRate(from, to) : 0;
                }
                return rate;
            }
        }
    }
    catch (Exception ex)
    {
        Log.Warning("Get currency rate from Yahoo fail " + ex);
        Thread.Sleep(550);
        ++Try;
        return Try < 5 ? GetRate(from, to) : 0;
    }
}
I've got the same issue.
I need exchange rates in my app, so I decided to use currencylayer.com API instead - they give 168 currencies, including precious metals and Bitcoin.
I've also written a microservice using webtask.io to cache rates from currencylayer and do cross-rate calculations.
And I've written a blog post about it 🤓
Check it out if you want to run your own microservice, it's pretty easy 😉
I found a solution: in my case, just changing http to https made everything work fine.

Return Image as http-response from Listener Symfony2

This listener sends me reports on all kinds of exceptions that occur on the website. Sometimes I get reports about images that have been deleted but are still requested by search engines and others.
Instead of displaying a "404 Not Found" error message, I want to return the correct image. To do this I created a database table that stores the old links and the new links of the images that have been deleted, moved or renamed.
This listener then looks up the broken links in the DB and gets the new links of the images. The goal is to return the image as an HTTP response with a Content-Type image header.
My code is:
class ExceptionListener
{
    private $service_container;
    private $router;

    function __construct(Container $service_container, $router)
    {
        $this->service_container = $service_container;
        $this->router = $router;
    }

    public function onKernelException(GetResponseForExceptionEvent $event)
    {
        $exception = $event->getException();
        $request = $this->service_container->get('request');
        // ...
        $document_root = $request->server->get('DOCUMENT_ROOT');
        $filename = realpath($document_root . '/'. '/path/to/new_image.jpg');
        $response = new \Symfony\Component\HttpFoundation\Response();
        $response->headers->set('Content-type', 'image/jpeg');
        $response->headers->set('Content-Disposition', 'inline; filename="' . basename($filename) . '";');
        $response->headers->set('Content-length', filesize($filename));
        $response->sendHeaders();
        $response->setContent(file_get_contents($filename));
        return $response;
        // ...
    }
}
The following error occurs:
In the browser you can see a small box; it is as if it tries to show the image but the image source could not be obtained. But if the same code is tested in a controller, it works properly and the image is displayed.
What can I do to return an image from a listener? Thanks.
There are a few things wrong with the code snippet from your question.
Firstly, you should never use the request service. It's deprecated since Symfony 2.4 and was removed in Symfony 3.0. Use the request stack (request_stack) instead.
Secondly, do not send the response yourself; let the framework do it. The Symfony event system is designed for flexibility (see the docs). In your case it's enough to set the response on the event object.
Finally, you don't need the service container to access the request at all, as it's available on the event.
Moreover, instead of the standard Response class you can use the BinaryFileResponse. Its purpose is to serve files (have a look at the docs).
You can greatly simplify your listener:
use Symfony\Component\HttpFoundation\BinaryFileResponse;

class ExceptionListener
{
    private $router;

    function __construct($router)
    {
        $this->router = $router;
    }

    public function onKernelException(GetResponseForExceptionEvent $event)
    {
        $exception = $event->getException();
        // request is available on the event
        $request = $event->getRequest();
        // ...
        $file = 'path/to/file.txt';
        $response = new BinaryFileResponse($file);
        // ... set response parameters ...
        // finally, set the response on your event
        $event->setResponse($response);
    }
}
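If you also want the inline Content-Disposition header from the original snippet, BinaryFileResponse provides a helper for it (a small sketch; $file comes from the example above):

use Symfony\Component\HttpFoundation\ResponseHeaderBag;

// Sets 'Content-Disposition: inline; filename="..."' on the response.
$response->setContentDisposition(ResponseHeaderBag::DISPOSITION_INLINE, basename($file));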

PHP Redis Session not saving

EDIT
I tried debugging this with Xdebug and NetBeans. It's weird that the exports work during the debug session if I put in some breakpoints. However, with no breakpoints (a more realistic environment), the exports don't work.
I've tried adding sleeps into some parts of the code.
I think that maybe PHP is ending before the Redis commit is completed. Maybe the Redis connections are being made asynchronously, but I checked Predis and the default is a synchronous connection.
I am working on a reporting tool.
Here is the basic issue.
We store a report in the session object, but on later requests, when we try to get to the report in the session object, it's gone.
Here is a more detailed version.
I store a 'report' object into the session like so
$_SESSION['report_name_unixtimestamp'] = gzcompress( serialize( $reportObject ) );
The user sees the report in some table form and then if they want they can export it. The report could change so the idea behind storing it in the session like this is that when the user exports it to PDF, Excel, etc, they'll be getting a report identical to the one they are viewing.
The user clicks on an export button and on the PHP side it will go into the session, fetch the report via the key provided as a get parameter (uncompresses and unserializes it), create the export and send it to the user for download.
This has worked well up until the point that we tried to introduce the Redis caching server as a tool for better session management.
What happens now is the following:
The first time we run the report it will get stored into the cache and the export will work successfully.
We will run the report again, with the same user account in the same session. This changes the unix timestamp, so there should be two entries in $_SESSION ($_SESSION['report_name_oldertimestamp'] and $_SESSION['report_name_newertimestamp']). When we click on the export button again, we get an error saying that the file doesn't exist (because it hasn't been sent by the server).
If we check the redis server for the newer version of the report it isn't there, but the old timestamp is still there.
Now, this worked with the file-based session management but not with Redis. We've tried the redis module for PHP as well as the pure PHP client Predis.
Does anyone have any ideas?
Here are a few more details :
Redis has NOT run out of memory. We've checked this many times.
We already know that to unserialize the report object in the session the report class has to be included already. ( remember, the first export works fine but anything after that fails )
If we check the php session object during the request that the report is running on, it WILL contain the newer report but it never makes it to Redis.
Below is the save handler that is being used with Predis.
The redis_session_init is the function I call right before session_start() so that it gets registered. I'm not sure how the redis_session_write function works though so maybe someone can help me with that.
<?php
namespace RedisSession
{
    $redisTargetPrefix = "PHPREDIS_SESSION:";
    $unpackItems = array();
    $redisServer = "tcp://cache.emcweb.com";

    function redis_session_init($unpack = null, $server = null, $prefix = null)
    {
        global $unpackItems, $redisServer, $redisTargetPrefix;
        if ($unpack !== null) {
            $unpackItems = $unpack;
        }
        if ($server !== null) {
            $redisServer = $server;
        }
        if ($prefix !== null) {
            $redisTargetPrefix = $prefix;
        }
        session_set_save_handler('RedisSession\redis_session_open', 'RedisSession\redis_session_close', 'RedisSession\redis_session_read', 'RedisSession\redis_session_write', 'RedisSession\redis_session_destroy', 'RedisSession\redis_session_gc');
    }

    function redis_session_read($id)
    {
        global $redisServer, $redisTargetPrefix;
        $redisConnection = new \Predis\Client($redisServer);
        return base64_decode($redisConnection->get($redisTargetPrefix . $id));
    }

    // PHP calls this once at the end of the request with the fully serialized
    // session payload; it is stored with a TTL, and selected items are also
    // mirrored into keys of their own.
    function redis_session_write($id, $data)
    {
        global $unpackItems, $redisServer, $redisTargetPrefix;
        $redisConnection = new \Predis\Client($redisServer);
        $ttl = ini_get("session.gc_maxlifetime");
        $redisConnection->pipeline(function ($r) use (&$id, &$data, &$redisTargetPrefix, &$ttl, &$unpackItems) {
            $r->setex($redisTargetPrefix . $id, $ttl, base64_encode($data));
            foreach ($unpackItems as $item) {
                $keyname = $redisTargetPrefix . $id . ":" . $item;
                if (isset($_SESSION[$item])) {
                    $r->setex($keyname, $ttl, $_SESSION[$item]);
                } else {
                    $r->del($keyname);
                }
            }
        });
    }

    function redis_session_destroy($id)
    {
        global $redisServer, $redisTargetPrefix;
        $redisConnection = new \Predis\Client($redisServer);
        $redisConnection->del($redisTargetPrefix . $id);
        $unpacked = $redisConnection->keys($redisTargetPrefix . $id . ":*");
        foreach ($unpacked as $unp) {
            $redisConnection->del($unp);
        }
    }

    // These functions are all no-ops for various reasons... opening has no practical meaning in
    // terms of non-shared Redis connections, the same for closing. Garbage collection is handled by
    // Redis anyway.
    function redis_session_open($path, $name)
    {
    }

    function redis_session_close()
    {
    }

    function redis_session_gc($age)
    {
    }
}
The issue was solved, and it was much dumber than I thought.
The save handler doesn't implement locking in any way. On the report pages there are multiple requests being made to the server via Ajax and the like. One of the Ajax requests starts before the report gets saved to the session; it reads the session at its start and then writes the session back at its end.
Since the report executed faster each time, the report would get cached to the session in Redis but would then be overwritten by the other script, which had an older version of the session.
I had help from one of my co-workers. Ugh! This was a headache I'm glad to be over.
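For reference, a minimal per-session locking sketch with Predis (the lock key, timeout and helper names are assumptions, not part of the handler above); taking the lock in the read handler and releasing it after the write stops concurrent requests from overwriting each other's session data:

function redis_session_lock(\Predis\Client $redis, $id, $timeout = 30)
{
    $lockKey = "PHPREDIS_SESSION_LOCK:" . $id;
    $deadline = time() + $timeout;
    // SET with NX + EX acquires the lock atomically; spin until it succeeds
    // or the deadline passes.
    while (!$redis->set($lockKey, '1', 'EX', $timeout, 'NX')) {
        if (time() >= $deadline) {
            break; // give up rather than deadlock the request
        }
        usleep(100000); // 100 ms
    }
}

function redis_session_unlock(\Predis\Client $redis, $id)
{
    $redis->del("PHPREDIS_SESSION_LOCK:" . $id);
}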
