I'm using Laravel 4 and Maatwebsite Laravel Excel to import from CSV files.
I managed to import each row in the CSV file successfully.
However, when it comes to empty files, the site returns: Row 2 is out of range (2 - 1).
Here is my code:
Excel::load($source, function($reader) {
    $reader->all();
});
Is there any way to skip empty rows, or to verify the row count before processing?
Use this method to get the row count, then check it before processing:
->getTotalRowsOfFile()
I have three columns with different data. Every column stores data in JSON format, but when I try to display them it's not working. Below is my code:
Response:
"[{"type":"pllet-a","price_km":50,"size":28000,},{"type":"pellet-b","price_km":40,"size":20000,}]"
I want to display only the value, for example (50 km), in a foreach loop. How can I fix this in a Laravel Blade file? I have tried json_decode but it's not working.
I have an interactive report, and the goal is to let the user export the data, but without one column.
What I tried for the column in question:
1) Server-side condition - Request is NOT contained in value; Value - CSV, HTML
2) NVL(:REQUEST,'EMPTY') not in ('CSV','HTMLD')
The column I am trying to not export is a link with an icon. I tried changing it to 'Plain text' but to no avail.
Oracle Apex version 21.2.0
I managed to accomplish it using this:
instr(nvl(:REQUEST,'~'),'HTML') = 0 and instr(nvl(:REQUEST,'~'),'CSV') = 0
The approach described earlier worked for me with APEX 21.x.
However, in APEX 22.1 the CSV download seems to behave differently; is this correct? For me, the values are no longer re-evaluated during the download, so I cannot control this (we have a server-side condition for that which accesses a page item that we fill with logic evaluating :REQUEST).
Hello everyone, I need your help.
I have a file that I get from a daily report download, with an .xlsx extension.
I want to insert it into MongoDB for analytics, since each day can bring 100K rows. I use Go with excelize to extract the Excel data and then insert it into MongoDB. Here is the problem:
When I call GetRows("Sheet 1") the result is 0 rows, and when I check the sheet with GetSheetMap() the result is [0:Sheet 1], still 0 rows.
But when I rename the sheet (e.g. to "another Sheet") and check the sheet map, it changes to [1:another Sheet], and the rows are detected once the sheet map key is 1. How can I fix this?
Thanks in advance.
Are you sure it shouldn't be "Sheet1" (i.e. without a space)?
I am using the COPY command to load data from a CSV file into a table using an internal stage.
After loading the data, I use the query below to get the number of rows loaded and failed:
Select * from table(Result_Scan('Copy_Query_ID'))
I am also using the query below to get the actual failed records:
select * from table(validate("Table_Name",job_id=>'Copy_Query_ID'))
It worked fine a few times. But today I noticed that the first query shows the following:
Rows_Parsed  Rows_Loaded  Errors_Seen
10000        9600         400
So I was expecting 400 rows in the second query's result, but instead I see 10,400 records:
all rows once, plus an additional 400 records with some other errors. If all rows are error rows, why were they loaded? Can I not use these queries for this purpose?
Note: my file has 6 fields, but I am using only 4 of them in the COPY; the remaining two fields I populate using SYSDATE() and a constant. Maybe this is the reason for the mismatch?
Copy into table(col1,col2,col3,col4,col5,col6) from (select $1,$2,$3,$4, sysdate(), '10/20/2020' from @%table)
So I am guessing VALIDATE is not looking at my new values for fields 5 and 6; instead, it takes these values from the file?
I would like to know how I can split an Excel document into multiple Excel documents with the records split equally.
For example, if I have an Excel file of 200 records, I have to split it through UiPath and produce two Excel documents of 100 records each as output.
1) Read the Excel file into a DataTable.
2) For each row in the DataTable, add the row to the first new DataTable.
3) Once your row number (counter) is greater than DataTable.Rows.Count / 2, add the remaining rows to the second new DataTable.
4) Save both new DataTables to two different Excel files.