I am running into issues when uploading inventory files using the Dataflow import. I am only importing about 500 items, but at record 42 it seems to hit an error and just displays a blank red bar with no description of the error. If I run the import with fewer than 40 lines I don't get an error.
To rule out my own formatting/encoding, I exported the product stock file and then tried importing that same file, and I get the same "blank" errors. I have tried just about everything I could find without any luck, so I'm seeing if anyone else has an idea or something else I could try.
Magento version - 1.9.2.1
You should try getting your server limits increased, something like this:
max_execution_time = 36000 ; Maximum execution time of each script, in seconds
max_input_time = 60 ; Maximum amount of time each script may spend parsing request data
memory_limit = 1024M ; Maximum amount of memory a script may consume
Or, you can try adding the following code to index.php:
ini_set('memory_limit', '1024M');
ini_set('max_execution_time', 36000);
Also check out: Customer CSV export fails with blank page.
I might be late, but here's what I did to overcome the same issue.
Split your product CSV file into smaller CSV files of 40 products each, with the help of a free online CSV splitter - http://splitfile.vavro.me - or with a small script as sketched below. (I had a CSV file of 3000+ products, so it got split into 75 files.)
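If you'd rather not push your product data through a third-party website, a small PHP script can do the same splitting locally. This is only a rough sketch; the file names and the chunk size of 40 are placeholders to adjust for your own profile:

<?php
// Split a big product CSV into chunks of 40 data rows each,
// repeating the header row at the top of every chunk.
$source    = 'products.csv';   // placeholder: your exported file
$chunkSize = 40;

$in = fopen($source, 'r');
if ($in === false) {
    die("Cannot open $source\n");
}

$header = fgetcsv($in);        // first line: column names
$out    = null;
$chunk  = 0;
$row    = 0;

while (($data = fgetcsv($in)) !== false) {
    if ($row % $chunkSize === 0) {
        if ($out !== null) {
            fclose($out);
        }
        $chunk++;
        $out = fopen(sprintf('products_part_%03d.csv', $chunk), 'w');
        fputcsv($out, $header);
    }
    fputcsv($out, $data);
    $row++;
}

if ($out !== null) {
    fclose($out);
}
fclose($in);
echo "Wrote $chunk files\n";

Each products_part_*.csv keeps the original header row, so every chunk can be run through the same Dataflow profile.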
Then upload all those small CSV files to your server's "var/import/" folder. Now log into your Magento back-end, go to System > Import/Export > Dataflow - Profiles, choose your profile and go to 'Run Profile'. There you can see all the uploaded CSV files listed in a drop-down. Execute the files one by one. It's a pretty long process, but worth it.
Note that after executing each file, you need to wait at least 2 minutes for the indexes to refresh, or else you might end up getting the same blank red errors again!
Related
I have been running several independent multi-class NL models on an identical data set (to compare performance to a multi-label model) and had no problems importing the data or running the models. I've just been through the identical preparation process, uploaded the file to the bucket and now get this error on import:
Uri is not found in CSV row ",NotWarm".
Warm and NotWarm are my labels. A sample of the CSV is below so you can see the format:
"To ensure you get the best possible service, we stagger the cut-off time for next day delivery from 5pm right up until Midnight.",Warm
You’ll be able to see if Next Day Delivery is still available when you place your order.,NotWarm
"You can choose a home delivery option, which lets you have your order delivered to an address of your choice.",Warm
"Some eligible items also let you choose Click + Collect, where your order is delivered to a local store.",NotWarm
I've double checked all the advice about preparing datasets on the AutoML help pages. The file itself has been encoded in UTF-8 using Notepad++ so there should be nothing amiss with the CSV format. The file is identical to those I've used previously except for the labels.
Has something changed in the AutoML NL process, as it has been over a month since my last model was created?
Thanks in advance for any guidance.
SOLVED
I tagged all my labels with unique numbers to determine which line of data the upload was failing on. It turned out some blank lines had crept into the file, so I was trying to assign a label to a null string. I removed the empty lines and everything works. Hopefully this helps someone else.
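For anyone hitting the same error, a quick pre-upload check along these lines will point out the offending rows. It's just a sketch in plain PHP, and the file name is a placeholder:

<?php
// Report CSV rows that are completely blank or have an empty text
// column, which is what produces a ",NotWarm"-style row on import.
$file = 'training_data.csv';   // placeholder: your dataset

$handle = fopen($file, 'r');
if ($handle === false) {
    die("Cannot open $file\n");
}

$lineNo = 0;
while (($row = fgetcsv($handle)) !== false) {
    $lineNo++;
    // fgetcsv() returns a single null field for a completely blank line.
    if ($row === array(null) || trim((string) $row[0]) === '') {
        echo "Problem on line $lineNo: " . json_encode($row) . "\n";
    }
}
fclose($handle);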
I'm using Oracle Database 12c (12.1.0.1.0, 64-bit). Some time ago I wrote a piece of PL/SQL software to import several XML files. This seemed to work very well, but then some problems occurred. Some files are 5 to 25 MB in size, so it takes one or two minutes to import them. But for some files the import never ends, and the importing process can't even be stopped; I have to restart the server to get rid of it.
I traced the problem to the following statement:
INSERT INTO SB_BUFFER_XML VALUES (XMLType(bfilename('XMLDATA', '840.xml'), nls_charset_id('AL32UTF8')));
The table SB_BUFFER_XML is an XMLType table and XMLDATA points to a local directory. The command never finishes with the file 840.xml, but it finishes with the file 613.xml. Both are similar in size; 613.xml is even bigger:
840.xml: 6,329 KB
613.xml: 6,905 KB
So I started to compare both files looking for the problem:
both files are UTF-8 without BOM
both contain the same structure, but different data
the XML syntax check finished successfully
even in a hex editor both files start and finish with the same characters (so there's no hidden BOM or anything)
both files were created in the same system in the same version
So I simply started to delete content from 840.xml to reduce the complexity, and I saw that it doesn't matter what I delete: as soon as I delete a specific amount of data, even if it is a comment, the import of this file works flawlessly.
The strange thing is that I have already imported XML files with the same structure from the same system with file sizes of over 20 MB.
Do you have any idea what could cause this problem or what I could check next?
Oracle was able to reproduce our problem and pointed us to the following bug:
Bug 22843562 - IMPORT OF A XML FILE WITH A COMMENT AT THE END FAILS WITH ORA-27163
This bug is fixed in 12.2. In 12.1 you can try this workaround to activate the old parser:
ALTER SESSION SET EVENTS '31156 trace name context forever, level 0x400';
While the bug title didn't describe our problem, the workaround worked for us nevertheless.
I'm trying to build a system (Node.js + Express 4) that reads a user-uploaded text file, processes it, and feeds it back to the user. I'm using an AJAX upload, with multer as the parser for the multipart data. The whole workflow is supposed to be like this:
The user chooses a local file and clicks the upload button.
The server receives the file and reads it.
The server does some processing with the data.
The results are sent back.
Every part of the chain works except the server-side read: sometimes the file is not read fully, even though the server signals that the upload is complete (I have tried multiple libraries, like multer, busboy, and formidable, that trigger the upload-complete event). I have done various experiments, and here's what I found (with a 1000-line file):
fs.readFile sometimes ends prematurely. The result can be anywhere between 100 and 1000 lines.
The missing part is almost always the last small piece; it feels like the pipe was not fully flushed yet. I have tried file sizes between 1,000 and 200,000 lines, and it's always missing the last few hundred lines.
Using streaming (createReadStream, or byline, line-by-line) almost solved the issue, but sometimes the result can be 'undefined' or missing the last few lines, though a lot less frequently.
Triggering the read twice almost guarantees that the second read gets the full 1000 lines.
Is there any way to force Node.js to 'flush' the uploaded file? Somehow I feel the upload-complete event is triggered (regardless of library, and they all depend on the file system, I guess) before the last piece of the file has been flushed in the stream. Or maybe there is some other issue: reading a static file always gives the correct results. I could use plain HTTP POST forms, but I'd like to use AJAX to improve the user experience.
Any thoughts?
I have made a little function that deletes files based on date. Prior to doing the deletions, it lets the user choose how many days/months back to delete files, telling them how many files and how much disk space it would clean up.
It worked great in my test environment, but when I test it on a larger directory (approximately 100K files), it hangs.
I've stripped everything else from my code to confirm that it is the get_dir_file_info() function that is causing the issue.
$this->load->helper('file');
$folder = "iPad/images/";
set_time_limit (0);
echo "working<br />";
$dirListArray = get_dir_file_info($folder);
echo "still working";
When I run this, the page loads for approximately 60 seconds, then displays only the first message “working” and not the following message “still working”.
It doesn't seem to be a system/PHP memory problem, as the page comes back after 60 seconds, and the server respects my set_time_limit(), since I've had to use that for other processes.
Is there some other memory/time limit I might be hitting that I need to adjust?
From the CI user guide, get_dir_file_info():
Reads the specified directory and builds an array containing the filenames, filesize, dates, and permissions. Sub-folders contained within the specified path are only read if forced by sending the second parameter, $top_level_only to FALSE, as this can be an intensive operation.
So if you have 100k files, the best way to do it is to cut it into two steps, as sketched below:
First: use get_filenames('path/to/directory/') to retrieve all your file names without their information.
Second: use get_file_info('path/to/file', $file_information) to retrieve a specific file's info only when you need it, as you might not need all the file information immediately. It can be done on a file-name click or something similar.
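Roughly, the two-step version of the snippet from the question could look like this (the folder path is taken from the question; get_filenames() and get_file_info() are both part of CI's file helper):

$this->load->helper('file');
$folder = "iPad/images/";

// Step 1: only the file names - cheap even for ~100K files.
$filenames = get_filenames($folder);
echo count($filenames) . " files found<br />";

// Step 2: fetch the details of a single file only when you actually
// need them, e.g. when the user clicks a file name.
$info = get_file_info($folder . $filenames[0], array('name', 'size', 'date'));
echo $info['name'] . ' - ' . $info['size'] . ' bytes';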
The idea here is not to force your server to deal with a large amount of processing while in production; that would kill both responsiveness and performance.
I have a function (I'm using CodeIgniter) that uploads a file, resizes it, and saves details into a database.
I have no problems uploading images up to 1 MB, so I know the permissions are OK.
However, as soon as I try to upload something above 1 MB, the function becomes really slow, and after a while I'm presented with a blank page.
These are the main values in the php.ini file:
post_max_size: 32M
max_input_time: 60
max_execution_time: 30
file_uploads: 1
upload_max_filesize: 32M
According to this, I should have plenty of time and megabytes to upload the file successfully.
What else could this depend on?
UPDATE (following Mike's and Minboost's questions below)
a. Logs are clean, no sign of problems there; in fact the log shows that the page was processed in 0.03 seconds!
b. memory_limit is 96 MB.
c. I'm not applying XSS filters on this.
...any additional ideas?
The thing I don't understand is that it takes a very long time to upload a file even on my Mac (localhost); I've managed to upload a 2.7 MB picture, but I had to wait a few minutes. There seems to be a step change (for the worse) above the 500 KB threshold: uploads are smooth and fast below that, and become very slow above it.
It could also depend on memory_limit.
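One quick sanity check (plain PHP, nothing CodeIgniter-specific) is to dump the limits PHP is actually running with for that request, since they can differ from the php.ini file you are editing:

echo 'memory_limit: ' . ini_get('memory_limit') . '<br />';
echo 'upload_max_filesize: ' . ini_get('upload_max_filesize') . '<br />';
echo 'post_max_size: ' . ini_get('post_max_size') . '<br />';
echo 'max_execution_time: ' . ini_get('max_execution_time') . '<br />';
echo 'max_input_time: ' . ini_get('max_input_time');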
Are you checking the error logs? What errors are returned? Make sure you're not XSS-filtering the upload file form field. Also, I've had to try this before:
set max_allowed_packet higher in /etc/my.cnf and restart MySQL.
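For reference, that MySQL change is just an edit under the [mysqld] section of /etc/my.cnf followed by a restart; the value below is only an example:

# /etc/my.cnf
[mysqld]
max_allowed_packet = 32M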