Magento foreach() orders getAllItems()

I am not sure why this loop is not working.
$orders = Mage::getSingleton('sales/order')->getCollection()
    ->addAttributeToSelect('*')
    ->addFieldToFilter('created_at', array('from' => $from, 'to' => $to))
    ->addAttributeToSort('increment_id', 'ASC');

foreach ($orders as $item) {
    $order_id = $item->increment_id;
    if (is_numeric($order_id)) $order = Mage::getModel('sales/order')->loadByIncrementId($order_id);
    if (is_object($order)) {
        echo "> O: " . $order_id . "<BR>";
        $items = $order->getAllItems();
        echo ">> O: " . $order_id . "<BR>";
    } else {
        die("DIE " . var_dump($order));
    }
}
die("<BR> DONE");
The output:
...
...
>> O: 100021819
> O: 100021820
>> O: 100021820
> O: 100021821
The loop never finishes, nor does it stop at the same order_id each time.
It always fails at $order->getAllItems().
These orders are either pending, processing or complete.
Is there something I should be checking for with $order->getAllItems(), since that's where it's failing?
Thanks.

Jon, I assume the problem you're talking about is your script ending unexpectedly, i.e. you see the output with a single >
> O: 100021821
but not the output with the double >>.
Because Magento is so customizable, it's impossible to accurately diagnose your problem with the information given. Something is happening in your system (a PHP error, an uncaught exception, etc.) that results in your script stopping. Turn on developer mode, set the PHP ini setting display_errors to 1 (ini_set('display_errors', 1);), and check your error log. Once you (or we) have the PHP error, it'll be a lot easier to help you.
My guess is you're running into a memory problem. The way PHP implements objects can lead to small memory leaks (objects don't always clean up after themselves correctly), which means each pass through the loop slowly eats into the total amount of memory allowed for a single PHP request. For a system with a significant number of orders, I'd be surprised if the above code could get through everything before running out of memory.
If your problem is a memory problem, there's information on manually cleaning up after PHP's objects in this PDF. You should also consider splitting your work into multiple requests, i.e. the first request handles orders 1 - 100, the next 101 - 200, and so on; a rough sketch of that idea follows.
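A minimal sketch of that batching approach, assuming the same date filter as in your code (the page size of 100 is arbitrary):

$page = 1;
do {
    // load one page of orders at a time instead of the whole collection at once
    $orders = Mage::getModel('sales/order')->getCollection()
        ->addAttributeToSelect('*')
        ->addFieldToFilter('created_at', array('from' => $from, 'to' => $to))
        ->addAttributeToSort('increment_id', 'ASC')
        ->setPageSize(100)
        ->setCurPage($page);

    $lastPage = $orders->getLastPageNumber();   // total number of pages for this filter

    foreach ($orders as $order) {
        // the collection already returns full order objects, so there is no
        // need to re-load each one with loadByIncrementId()
        echo "> O: " . $order->getIncrementId() . "<BR>";
        foreach ($order->getAllItems() as $orderItem) {
            echo ">> I: " . $orderItem->getSku() . "<BR>";
        }
    }

    $orders->clear();   // release the loaded objects before fetching the next page
    unset($orders);
    $page++;
} while ($page <= $lastPage);

Only 100 orders are held in memory at any time, and clear() drops them before the next page is loaded.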

What do you mean it fails?
By the look of the output it doesn't fail there, as it outputs text on either side of the call to getAllItems().
change:
$items = $order->getAllItems();
to:
foreach ($order->getAllItems() as $orderItem) {
    echo $orderItem->getId() . "<br />";
}
and see what happens.
The script could be ending on a different order ID each time if you have a low memory limit set on the server and it quits when it runs out of resources.
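One quick way to check that theory (just a sketch) is to log the limit and the usage on every pass through the loop:

echo 'memory_limit: ' . ini_get('memory_limit') . "<br />";
foreach ($orders as $item) {
    // ... existing processing ...
    echo round(memory_get_usage(true) / (1024 * 1024), 1) . " MB used<br />";
}

If the reported usage climbs steadily towards memory_limit before the script dies, you'll need to process the orders in smaller batches or raise the limit.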

Related

Laravel multiple tasks simultaneously

I need to process several image files from a directory (an S3 directory). The process reads the id and type that are encoded in the filename (e.g. 001_4856_0-P-0-A_.jpg); new files keep being stored there while the process runs (I'm using cron and the scheduler, and it works great). The objective of the process is to store that info in a database.
I have the process working and it works great, but my problem is the number of files in the directory, because more files are added every second. Processing takes about 0.19 sec per file, but the volume is huge, about 15,000 new files per minute, so I think running multiple simultaneous copies of the original process (about 10 - 40 of them) could do the job.
I need some advice or ideas:
First, how to launch multiple processes at the same time from one original process.
Second, how to pick up only the filenames that haven't been processed yet, because the process currently gets the filenames with:
$recibidos = Storage::disk('s3recibidos');

if (count($recibidos) <= 0) {
    $lognofile = ['Archivos' => 'No hay archivos para procesar'];
    $orderLog->info('ImagesLog', $lognofile);
} else {
    $files = $recibidos->files();

    if (Image::count() == 0) {
        $last_record = 1;
    } else {
        $last_record = Image::latest('id')->pluck('id')->first() + 1;
    }

    $i = $last_record;
    $fotos_sin_info = 0;

    foreach ($files as $file) {
        $datos = explode('_', $file);
        $tipos = str_replace('-', '', $datos[2]);

        Image::create([
            'client_id' => $datos[0],
            'tipo'      => $tipos,
        ]);

        $recibidos->move($file, '/procesar/' . $i . '.jpg');
        $i++;
    }
}
but I haven't figured out how to retrieve only the files that haven't already been picked up.
Thanks for your comments.
Using multi-threaded programming in PHP is possible and has been discussed on SO: How can one use multi threading in PHP applications.
However, this is generally not the most obvious choice for standard applications. A solution for your situation will depend on the exact use case.
Did you consider a solution using queues?
https://laravel.com/docs/5.6/queues
Or the scheduler?
https://laravel.com/docs/5.6/scheduling
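For illustration, a minimal sketch of the queue approach (the job class name ProcessImageFile is made up; the parsing and move logic is lifted from your snippet). The scheduled command only lists the files and dispatches one job per file; the queue workers then do the per-file work in parallel:

// app/Jobs/ProcessImageFile.php (hypothetical job class)
namespace App\Jobs;

use App\Image;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Storage;

class ProcessImageFile implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $file;

    public function __construct($file)
    {
        $this->file = $file;
    }

    public function handle()
    {
        // Parse "001_4856_0-P-0-A_.jpg" into client id and type
        $datos = explode('_', basename($this->file));
        $tipos = str_replace('-', '', $datos[2]);

        $image = Image::create([
            'client_id' => $datos[0],
            'tipo'      => $tipos,
        ]);

        // Use the new record's id as the target name (replaces the shared $i counter)
        // and move the file out of the inbox so it is not picked up again
        Storage::disk('s3recibidos')->move($this->file, '/procesar/' . $image->id . '.jpg');
    }
}

// In the scheduled command: dispatch one job per file
foreach (Storage::disk('s3recibidos')->files() as $file) {
    \App\Jobs\ProcessImageFile::dispatch($file);
}

You then start as many workers as you need (php artisan queue:work, run 10 - 40 times or managed by Supervisor). Because only the dispatcher lists the directory, the "which files were already selected" problem mostly disappears: each file is queued once and moved when its job runs. If the scheduler can fire again before all jobs finish, add a guard (for example, move the file to a staging folder at dispatch time) so a file cannot be queued twice.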

TYPO3 Extbase - Download file is cut off to 40 KB if user is not logged in to the front end

I created a duplicate of a colleague's download extension, which is basically an extension that just provides files to download in the back end.
Problem:
If I try to download a file while logged in to the back end (the extension is only accessible after back end login), it works perfectly fine;
however, if I open a private browser window where I am not logged in to the back end, the file is always cut off and only the first 40 KB are downloaded, even though it is normally 10 MB. Why is the file cut off?
I can download small files (< 40 KB) perfectly, without them getting cut off.
NOTE:
Before I edited the extension, the download worked perfectly, even when not logged in to the back end! And the download was triggered the same way.
Currently I am comparing the code, but the logic looks OK, since I did not change much (added a new model, renamed the extension and some other stuff).
Does someone have a clue what can lead to this problem?
This is the relevant part of my download controller, where I first get the public URL of the file by passing the file's fid and then trigger the download by sending headers.
...
if ($this->request->hasArgument('fid')) {
    $this->fid = $this->request->getArgument('fid');
}
if ($this->request->hasArgument('cid')) {
    $this->cid = $this->request->getArgument('cid');
}

$fileobj = $this->fileRepository->findByUid($this->fid);
if ($fileobj->getFile() !== null) {
    $downloadFilePath = $fileobj->getFile()->getOriginalResource()->getPublicUrl();
    if (file_exists($downloadFilePath)) {
        $fileCounter = (int)$fileobj->getCounter();
        $fileobj->setCounter(++$fileCounter);
        $oldChecksum = $fileobj->getChecksume();
        $groesse = filesize($downloadFilePath);
        if (isset($oldChecksum)) {
            $checksum = sha1_file($downloadFilePath);
            $fileobj->setChecksume($checksum);
        }
        // update fileobj
        $this->fileRepository->update($fileobj);
        // Unset fileobj before persists, otherwise there will be also changes
        $this->persistenceManager->persistAll();
        // If file exists, force download
        $fileName = basename($downloadFilePath);
        $this->response->setHeader('Content-Type', "application/force-download", TRUE);
        $this->response->setHeader('Content-Disposition', 'attachment; filename=' . $fileName, TRUE);
        $this->response->setHeader('Content-Length', $groesse, TRUE);
        #readfile($downloadFilePath);
        $this->response->sendHeaders();
        return true; //i can also delete this line, since it is never reached.
    } else {
        //send emails to everyone who is entered in the address list in the extension configuration.
        $this->sendEmails('missing_file', $fileobj);
        $this->redirect(
            'list',
            'Category',
            NULL,
            array(
                'missing' => array(
                    'fileId' => $this->fid,
                    'category' => $this->cid
                )
            )
        );
    }
}
The 40 KB file does not contain anything that shouldn't be there; it is just cut off. I tested it by writing a lot of numbers into a file line by line and downloading it; the result: only a couple thousand numbers end up in the file instead of all of them.
I tried it with both files stored on an FTP server and files stored in user_upload, with the same result.
Here you can see the 40 KB file:
http://pasteall.org/459911
Snippet (in case if the link is down):
<ul>
<li>0</li>
<li>1</li>
<li>2</li>
<li>3</li>
<li>4</li>
<li>5</li>
<li>6</li>
<li>7</li>
<li>8</li>
<li>9</li>
//Cut because stackoverflow does not allow me to post such big texts
...
<li>3183</li>
<li>3184</li>
<li>3185</li>
<li>3186</li>
<li
You can see that it stops downloading the rest; the question is: why?
UPDATE:
I changed it to this:
// If file exists, force download
$fileName = basename($downloadFilePath);
$this->response->setHeader('Content-Type', "application/force-download", TRUE);
$this->response->setHeader('Content-Disposition', 'attachment; filename=' . $fileName, TRUE);
$this->response->setHeader('Content-Length', $groesse, TRUE);
ob_start();
ob_flush();
flush();
$content = file_get_contents($downloadFilePath);
$this->response->setContent($content);
$this->response->sendHeaders();
return true; //i can also delete this line, since it is never reached.
Now the file is downloaded completely, but it is wrapped inside the HTML from the template: it gets rendered inside the Fluid variable mainContent.
Like this:
...
<!--TYPO3SEARCH_begin-->
<div class="clearfix col-sm-{f:if(condition:'{data.backend_layout} == 4',then:'12',else:'9')} col-md-{f:if(condition:'{data.backend_layout} == 4',then:'9',else:'6')} col-lg-{f:if(condition:'{data.backend_layout} == 4',then:'10',else:'8')} mainContent">
<f:format.raw>{mainContent}</f:format.raw>
</div>
<!--TYPO3SEARCH_end-->
...
It gets weirder and weirder...
I finally solved the problem. I just had to execute exit or die after sending the headers:
#readfile($downloadFilePath);
$this->response->sendHeaders();
exit;
NOTE: If you end your code with exit or die, then a TYPO3 session value set with e.g. $GLOBALS['TSFE']->fe_user->setKey("ses", "token", DownloadUtility::getToken(32)); won't work anymore if the user is not logged in to the back end! Use $GLOBALS['TSFE']->fe_user->setAndSaveSessionData("token", DownloadUtility::getToken(32)); in that case if no login should be required.
Now it works even if not logged in to the front end.
But I still don't know why the download worked without being cut off while logged in to the back end, even though the exit statement was missing. That's extremely weird and we have no explanation.
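For reference, a common pattern for this kind of forced download (a sketch, not the exact code of the extension) is to send the headers, stream the file, and then exit so TYPO3 never gets the chance to wrap the output in the page template:

$this->response->setHeader('Content-Type', 'application/force-download', TRUE);
$this->response->setHeader('Content-Disposition', 'attachment; filename=' . basename($downloadFilePath), TRUE);
$this->response->setHeader('Content-Length', filesize($downloadFilePath), TRUE);
$this->response->sendHeaders();   // emit the collected headers
readfile($downloadFilePath);      // stream the file straight to the client
exit;                             // stop before TYPO3 renders the surrounding page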

Using parfor and labSend/labRecieve

I want to run two MATLAB scripts in parallel for a project and communicate between them. The purpose is to have one script do image analysis and send the results to the other, which will use them for further calculations (time consuming, but not related to the task of finding things in the images). Since both tasks are time consuming and should preferably run in real time, I believe that parallelization is necessary.
To get a feel for how this should be done I created a test script to find out how to communicate between the two scripts.
The first script takes user input using the built-in function input and then sends it to the other using labSend; the second receives it and prints it.
function [blarg] = inputStuff(blarg)
mpiInit(); %added because of error message, but do not work...
for i=1:2
labBarrier; % added because of error message
inp = input('Enter a number to write');
labSend(inp);
if (inp == 0)
break;
else
i = 1;
end
end
end
function [ blarg ] = testWrite( blarg )
mpiInit(); % added because of error message, but does not help
par = 0;
if ( blarg == 0)
par = 1;
end
for i = 1:10
if (par == 1)
labBarrier
delta = labReceive();
i = 1;
else
delta = input('Enter number to write');
end
if (delta == 0)
break;
end
s = strcat('This lab no', num2str(labindex), '. Delta is = ')
delta
end
end
%%This is the file test_parfor.m
funlist = {@inputStuff, @testWrite};
matlabpool(2);
mpiInit(); % added because of error message, but does not help
parfor i=1:2
funlist{i}(0);
end
matlabpool close;
Then, when the code is run, the following error message appears:
Starting matlabpool using the 'local' profile ... connected to 2 labs.
Error using parallel_function (line 589)
The MPI implementation has not yet been loaded. Please
call mpiInit.
Error stack:
testWrite.m at 11
Error in test_parfor (line 8)
parfor i=1:2
Calling the method mpiInit does not help... (Called as shown in the code above.)
And none of the examples that MathWorks provide in the documentation, or on their website, show this error or what to do about it.
Any help is appreciated!
You would typically use constructs such as labSend, labReceive and labBarrier within an spmd block, rather than a parfor block.
parfor is intended for implementing embarrassingly parallel algorithms, in other words algorithms that consist of multiple independent tasks that can be run in parallel, and do not require communication between tasks.
I'm stretching my knowledge here (perhaps someone more expert can correct me), but as I understand things, it does not set up an MPI ring for communication between workers, which is probably the explanation for the (rather uninformative) error message you're getting.
An spmd block enables communication between workers using labSend, labReceive and labBarrier. There are quite a few examples of using them all in the documentation.
Sam is right that the MPI functionality is not enabled during parfor, only during spmd. You need to do something more like this:
spmd
funlist{labindex}(0);
end
(Sam is also quite right that the error message you saw is pretty unhelpful)

BigQuery: 403 User Rate Limit Exceeded but error not shown in joblist

I'm receiving a 403 User Rate Limit Exceeded error when making queries, but I'm sure I'm not exceeding the limits.
In the past I've reached the rate limit doing inserts, and it was reflected in the job list as
[errorResult] => Array
(
[reason] => rateLimitExceeded
[message] => Exceeded rate limits: too many imports for this project
)
But in this case the job list doesn't reflect the query (neither an error nor done), and studying the job list I haven't reached the limits or come close to them (no more than 4 concurrent queries, each processing 692297 bytes).
Billing is active, and I've made only 2.5K queries in the last 28 days.
Edit: The user limit is set up to 500.0 requests/second/user
Edit: Error code received:
User Rate Limit Exceeded User Rate Limit Exceeded
Error 403
Edit: the code I use to make the query jobs and get the results:
function query_data($project, $dataset, $query, $jobid = null) {
    $jobc = new JobConfigurationQuery();
    $query_object = new QueryRequest();
    $dataset_object = new DatasetReference();
    $dataset_object->setProjectId($project);
    $dataset_object->setDatasetId($dataset);
    $query_object->setQuery($query);
    $query_object->setDefaultDataset($dataset_object);
    $query_object->setMaxResults(16000);
    $query_object->setKind('bigquery#queryRequest');
    $query_object->setTimeoutMs(0);

    $ok = false;
    $sleep = 1;
    while (!$ok) {
        try {
            $response_data = $this->bq->jobs->query($project, $query_object);
            $ok = true;
        } catch (Exception $e) { // sleep when the BQ API is not available
            sleep($sleep);
            $sleep += rand(0, 60);
        }
    }

    try {
        $response = $this->bq->jobs->getQueryResults($project, $response_data['jobReference']['jobId']);
    } catch (Exception $e) {
        // do nothing, it is retried below
    }

    $tries = 0;
    while (!$response['jobComplete'] && $tries < 10) {
        sleep(rand(5, 10));
        try {
            $response = $this->bq->jobs->getQueryResults($project, $response_data['jobReference']['jobId']);
        } catch (Exception $e) {
            // do nothing, it is retried on the next pass
        }
        $tries++;
    }

    $result = array();
    foreach ($response['rows'] as $k => $row) {
        $tmp_row = array();
        foreach ($row['f'] as $field => $value) {
            $tmp_row[$response['schema']['fields'][$field]['name']] = $value['v'];
        }
        $result[] = $tmp_row;
        unset($response['rows'][$k]);
    }
    return $result;
}
Are there any other rate limits, or is this a bug?
Thanks!
You get this error trying to import CSV files, right?
It could be one of these reasons:
Import Requests
Rate limit: 2 imports per minute
Daily limit: 1,000 import requests per day (including failures)
Maximum number of files to import per request: 500
Maximum import size per file: 4 GB
Maximum import size per job: 100 GB
The query() call is, in fact, limited by the 20-concurrent limit. The 500 requests / second / user limit in developer console is somewhat misleading -- this is just the number of total calls (get, list, etc) that can be made.
Are you saying that your query is failing immediately and never shows up in the job list?
Do you have the full error that is being returned, i.e. does the 403 message contain any additional information?
Thanks.
I've solved the problem by using only one server to make the requests.
Looking at what I was doing differently in the nightly cronjobs (which never fail), the only difference was that I was using a single client on one server instead of different clients on 4 different servers.
Now I have a single script on one server that manages the same number of queries, and it never gets the User Rate Limit Exceeded error.
I think there is a bug when managing many clients or many active IPs at a time, although the total number of threads never exceeds 20.

Magento dataflow takes too long to load CSV file

I have a large CSV file containing Inventory data to update (more than 35,000 rows). I created a method which extends Mage_Catalog_Model_Convert_Adapter_Productimport to do the inventory update. Then I used an Advanced Profile to do the update which calls that method.
It works very well when I run the profile manually. The problem is that when I use an extension which runs the profile from a cronjob, the system takes too long to load and parse the CSV file. I set the cronjob to run every day at 6:15 am, but the first row of the file wouldn't be processed until 1:20 pm the same day; it takes 7 hours just to load the file.
That somehow makes the process stop in the middle, with less than 1/3 of the records processed. It's been frustrating trying to figure out why and to solve the problem, but no luck.
Any ideas would be appreciated.
Varien_File_Csv is the class that parses your CSV file, and it takes too much memory.
Here is a function to log the amount of memory used and the peak memory usage:
public function log($msg, $level = null)
{
    if (is_null($level)) $level = Zend_Log::INFO;

    $units = array('b', 'Kb', 'Mb', 'Gb', 'Tb', 'Pb');

    // current memory usage
    $m   = memory_get_usage();
    $i   = floor(log($m, 1024));
    $mem = @round($m / pow(1024, $i), 2);

    // peak memory usage
    $mp   = memory_get_peak_usage();
    $ip   = floor(log($mp, 1024));
    $memp = @round($mp / pow(1024, $ip), 2);

    $msg = sprintf('(mem %4.2f %s, %4.2f %s) ', $mem, $units[$i], $memp, $units[$ip]) . $msg;
    Mage::log($msg, $level, 'my_log.log', 1);
}
$MyClass->log('With every message I log the memory is closer to the sky');
You could split your CSV (keeping the same filename) and call the job multiple times. You'll need to make sure a previous call doesn't run at the same time as a newer one; a rough sketch of the splitting step follows.
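For example, a sketch of pre-splitting the file (the path var/import/inventory.csv and the 5,000-row chunk size are just placeholders):

$source    = fopen('var/import/inventory.csv', 'r');   // placeholder path
$header    = fgetcsv($source);                         // keep the header row
$chunkSize = 5000;
$chunk     = 0;
$row       = 0;
$out       = null;

while (($line = fgetcsv($source)) !== false) {
    if ($row % $chunkSize == 0) {                      // start a new chunk file
        if ($out) {
            fclose($out);
        }
        $out = fopen(sprintf('var/import/inventory_%03d.csv', ++$chunk), 'w');
        fputcsv($out, $header);                        // repeat the header in every chunk
    }
    fputcsv($out, $line);
    $row++;
}
if ($out) {
    fclose($out);
}
fclose($source);

Each cron run can then copy the next chunk over the filename the profile expects and run the profile, with a simple lock file to prevent two runs from overlapping.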
Thanks
