CodeIgniter PHPExcel reader displaying data very slowly

I am using CodeIgniter with PHPExcel.
Basically, I am using Excel files to read data and display it on the website through PHPExcel. Currently it is taking a long time to load.
What it basically does is create JSON files through the PHPExcel libraries, and the data is read from the JSON once the page has loaded.
But I am now facing a slow load. When I went through the JSON files I saw that each is around 3.5 MB in size, and I have more than 3 files from which I am reading the data.
Can anyone suggest any workarounds for optimisation? I have read about "Reading in Chunks".
Can we read only a few rows for the first request, like the basic filtering we generally do when fetching from a database?

Maybe you should use a DB to avoid loading the files every time? If you use the JSON format, MongoDB or PostgreSQL (with a json field) would be a perfect fit. Or just parse the fields from the Excel file and load them into a normalized DB.
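If you do want to try the "Reading in Chunks" approach before moving to a database, PHPExcel supports it through a read filter: you implement PHPExcel_Reader_IReadFilter so only a window of rows is loaded into memory. A minimal sketch (the file name, start row, and chunk size are illustrative; in CodeIgniter they would come from the request):

```php
<?php
// Sketch of PHPExcel's documented "read in chunks" pattern.
// Assumes the PHPExcel library is installed and autoloadable.

require_once 'PHPExcel/IOFactory.php';

class ChunkReadFilter implements PHPExcel_Reader_IReadFilter
{
    private $startRow;
    private $endRow;

    public function __construct($startRow, $chunkSize)
    {
        $this->startRow = $startRow;
        $this->endRow   = $startRow + $chunkSize;
    }

    // Only cells inside the requested row window are read.
    public function readCell($column, $row, $worksheetName = '')
    {
        return $row >= $this->startRow && $row < $this->endRow;
    }
}

$reader = PHPExcel_IOFactory::createReader('Excel2007');
$reader->setReadDataOnly(true);                      // skip styles/formatting
$reader->setReadFilter(new ChunkReadFilter(1, 100)); // e.g. first 100 rows

$excel = $reader->load('data.xlsx');
$rows  = $excel->getActiveSheet()->toArray();
echo json_encode($rows);
```

That way the first request only parses (and serves) the first chunk, instead of generating a 3.5 MB JSON file up front; subsequent requests pass a later start row.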

Related

Reading a large CSV file in Azure Logic Apps using Azure Storage Blob

I have a large line-delimited (not comma) CSV file (1.2 million lines, 140 MB) that contains both data and metadata from a test. The first 50 or so lines are metadata, which I need to extract to populate an SQL table.
I have built a Logic App which uses the Azure Blob Storage connector as a trigger. The CSV file is copied into the blob, which triggers the app to do its stuff. For small files under 50 MB this works fine; however, I get this error for larger files:
InvalidTemplate. Unable to process template language expressions in action 'GetMetaArray' inputs at line '0' and column '0': 'The template language function 'body' cannot be used when the referenced action outputs body has large aggregated partial content. Actions with large aggregated partial content can only be referenced by actions that support chunked transfer mode.'.
The output query is take(split(body('GetBlobContent'), decodeUriComponent('%0D%0A')),100)
The query lets me put the line-delimited metadata into an array so I can run some queries against it to extract data, which I convert into variables and use to check the file for consistency (e.g. the metadata must meet certain criteria).
I understand that "Get Blob Content V2" supports chunking natively; however, from the error it seems I cannot use the body function to return my array. Can anyone offer any suggestions on how to get around this issue? I only need a tiny proportion of this file.
Thanks, Jonny
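One possible workaround (a sketch only — the action name, API version, and auth setup are assumptions, not taken from the question): since only the first ~50 lines are needed, bypass the connector and fetch just the first bytes of the blob with a plain HTTP action, using the Blob service's x-ms-range header so the response stays small enough for body() and split():

```json
"GetBlobHead": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://<account>.blob.core.windows.net/<container>/<blob>",
    "headers": {
      "x-ms-version": "2019-12-12",
      "x-ms-range": "bytes=0-65535"
    },
    "authentication": {
      "type": "ManagedServiceIdentity",
      "audience": "https://storage.azure.com"
    }
  }
}
```

The downstream take(split(body('GetBlobHead'), decodeUriComponent('%0D%0A')), 100) expression then operates on a 64 KB response instead of the full 140 MB file, which avoids the chunked-transfer restriction entirely.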

Exporting selected data into a PDF using an RDF file

I am currently trying to convert a simple table into a PDF file using an existing .rdf file.
My first approach was to look for a new program that can do so, because I want to replace the current 'Oracle Reports' program.
Is there any other program that would support converting SQL data into a PDF using an .rdf file?
I considered writing a Python 3 script to do just that, but I would not know where to start.
Oracle APEX 21.2 (the latest at the current time) has a package named APEX_DATA_EXPORT that can take a SELECT statement and export it to various formats, one of them being PDF. The example in the documentation shows how to generate a PDF from a simple query. After calling apex_data_export.export, you can use the BLOB that is returned by the function and do whatever you need with the PDF.
There are not many options for styling and formatting the table, but Oracle does plan to add more PDF printing capabilities in the future.

Laravel - Export PDF with huge data (~10k rows, >80 cols)

I'm writing code for an export-report feature; I used PhpSpreadsheet with the TCPDF library to export.
But when my data is huge (~10k rows, >80 cols), the output is only one blank page.
I tried chunking the data and exporting it as multiple PDF files (1.pdf, 2.pdf, ...), then merging them into one file using the pdftk library, but still no success.
Additionally, if I export many columns, the PDF does not show all of them, because the PDF paper size is too small.
Can anyone help me? What is the best solution for exporting huge data, and which library should I use?
Thanks, everyone!
You should give Laravel Snappy PDF a try,
or my second favorite, DOMPDF.
But for large data I definitely recommend Laravel Snappy PDF.
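A minimal sketch of what the Snappy route could look like, assuming the barryvdh/laravel-snappy package (the route, model, and Blade view names here are hypothetical, not from the question):

```php
<?php
// Sketch using barryvdh/laravel-snappy, a wkhtmltopdf wrapper.
// Assumes a Report Eloquent model and a reports.table Blade view.

use Barryvdh\Snappy\Facades\SnappyPdf as PDF;
use Illuminate\Support\Facades\Route;

Route::get('/report/pdf', function () {
    // cursor() streams rows one at a time instead of loading
    // all ~10k rows into memory at once.
    $rows = Report::query()->cursor();

    return PDF::loadView('reports.table', ['rows' => $rows])
        ->setOption('orientation', 'Landscape') // more room for 80+ columns
        ->setOption('page-size', 'A3')          // larger paper for wide tables
        ->download('report.pdf');
});
```

The wkhtmltopdf options also address the second problem: a landscape A3 page fits far more columns than the default portrait A4, and anything still too wide can be shrunk with CSS in the view.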

ORA-29285: file write error

I'm trying to extract data from an Oracle table. I'm using utl_file for that, and I'm receiving the error ORA-29285: file write error. The weird thing here is: if I try to extract the data directly from the table, I get the error; if I extract the data using a simple view, the error is returned as well; BUT if I extract the data using a view with an ORDER BY, the extraction succeeds. I can't understand where the error is; I already checked the length of the lines and found nothing. Any suggestion as to what the cause could be?
I extract a lot of other data through utl_file successfully. This particular data was originally uploaded to the Oracle table directly from a CSV file with ANSI encoding. However, I have other data uploaded the same way that I can export correctly. I checked the encoding too, in order to rule out possible mistakes, and I found nothing.
Many thanks,
Priscila Ferreira

Analyzing huge amount of JSON files on S3

I have a huge number of JSON files, >100 TB in total; each file is 10 GB bzipped, each line contains a JSON object, and they are stored on S3.
If I want to transform the JSON into CSV (also stored on S3) so I can import it into Redshift directly, is writing custom code using Hadoop the only choice?
Would it be possible to do ad-hoc queries on the JSON files without transforming the data into another format? (I don't want to convert them first every time I need to run a query, since the source keeps growing.)
The quickest and easiest way would be to launch an EMR cluster loaded with Hive to do the heavy lifting. By using the JsonSerDe, you can easily transform the data into CSV format: it only requires you to insert the data from the JSON-formatted table into a CSV-formatted table.
A good tutorial for handling the JsonSerde can be found here:
http://aws.amazon.com/articles/2855
Also a good library used for CSV format is:
https://github.com/ogrodnek/csv-serde
The EMR cluster can be short-lived and only needed for that one job, and it can also run on low-cost spot instances.
Once you have the CSV format, the Redshift COPY documentation should suffice.
http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
