From selected data into a PDF using an RDF file - Oracle

I am currently trying to convert a simple table into a PDF file using an existing .rdf file.
My first approach was to look for a new program that can do so because I want to replace the current 'Oracle Reports' program.
Is there any other program that would support converting SQL data into a PDF using an .rdf file?
I thought about writing a Python 3 script to do just that, but I wouldn't know where to start.

Oracle APEX 21.2 (the latest version at the time of writing) has a package named APEX_DATA_EXPORT that can take a SELECT statement and export it into various formats, one of them being PDF. The example in the documentation shows how to generate a PDF from a simple query. After calling apex_data_export.export, you can use the BLOB returned by the function and do whatever you need with the PDF.
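A minimal sketch of that pattern, following the flow shown in the documentation (the query and column names are placeholders):

    declare
        l_context  apex_exec.t_context;
        l_export   apex_data_export.t_export;
    begin
        -- open a query context over the statement to export
        l_context := apex_exec.open_query_context(
            p_location  => apex_exec.c_location_local_db,
            p_sql_query => 'select empno, ename, sal from emp' );

        -- render the result set as a PDF
        l_export := apex_data_export.export(
            p_context => l_context,
            p_format  => apex_data_export.c_format_pdf );

        apex_exec.close( l_context );

        -- l_export.content_blob now holds the finished PDF;
        -- store it in a table, attach it to a mail, or download it
    end;
    /

As far as I know, the package has to run in the context of an APEX session; from plain PL/SQL you can establish one first with apex_session.create_session.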
There are not many options for styling and formatting the table, but Oracle plans to add more PDF printing capabilities in the future.

Related

ORA-29285: file write error

I'm trying to extract data from an Oracle table. I'm using UTL_FILE for that, and I'm receiving the error ORA-29285: file write error. The weird part is that if I extract the data directly from the table, I get the error; if I extract the data using a simple view, the error is returned as well; BUT if I extract the data using a view with an ORDER BY, the extraction succeeds. I can't understand where the error is. I already looked at the length of the lines and found nothing. Any suggestion as to what it could be?
I extract a lot of other data through UTL_FILE successfully. This data in particular was first uploaded to the Oracle table directly from a CSV file with ANSI encoding. However, I have other data uploaded the same way that I can export correctly. I checked the encoding too, in order to rule out possible mistakes, and I found nothing.
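For reference, a minimal sketch of the kind of UTL_FILE extraction loop being described (the directory object, file name, and table are placeholder names, not the asker's actual code). One common cause of ORA-29285 is a line longer than the max_linesize the file was opened with, which defaults to 1024:

    declare
        l_file  utl_file.file_type;
    begin
        -- EXPORT_DIR and my_table are hypothetical names;
        -- opening with the 32767 maximum avoids the 1024 default,
        -- which raises ORA-29285 when a written line is too long
        l_file := utl_file.fopen( 'EXPORT_DIR', 'data.csv', 'w', 32767 );
        for rec in ( select col1 || ';' || col2 as line from my_table ) loop
            utl_file.put_line( l_file, rec.line );
        end loop;
        utl_file.fclose( l_file );
    end;
    /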
Many thanks,
Priscila Ferreira

How to save a text file to Hive using the table of contents as schema

I have many project reports in text format (Word and PDF). These files contain data that I want to extract, such as references, keywords, names mentioned, and so on.
I want to process these files with Apache Spark and save the result to Hive, using the power of DataFrames (with the table of contents as the schema). Is that possible?
Could you share with me any ideas about how to process these files?
As far as I understand, you will need to parse the files using Tika and manually create custom schemas as described here.
Let me know if this helps. Cheers.

Mass Export of BLOB data to CSV

I am working with an older Oracle database (I don't know which version of Oracle, sorry) and I need to do a mass export of 200,000+ files' worth of HTML data stored in BLOBs. I have downloaded and used both Toad and SQL Developer (Oracle's own DB GUI tool), and at best I am able to properly extract the HTML for a single row at a time.
Is there a way (query, tool, other GUI, etc...) that I can reliably do a mass export of all the BLOB data on this table to a CSV format?
Thank You.
You can use the UTL_FILE built-in package; through it you can write BLOB data to a file.
Refer here.
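A minimal sketch of that approach, assuming a directory object named EXPORT_DIR and placeholder table/column names (your_table, id, html_blob); it streams each BLOB to its own file with DBMS_LOB and UTL_FILE.PUT_RAW:

    declare
        l_file    utl_file.file_type;
        l_buffer  raw(32767);
        l_amount  binary_integer;
        l_pos     integer;
        l_len     integer;
    begin
        -- your_table, id, and html_blob are placeholder names
        for rec in ( select id, html_blob from your_table ) loop
            -- one output file per row; 'wb' = write in byte mode
            l_file := utl_file.fopen( 'EXPORT_DIR', rec.id || '.html', 'wb', 32767 );
            l_len  := dbms_lob.getlength( rec.html_blob );
            l_pos  := 1;
            while l_pos <= l_len loop
                -- read the BLOB in 32K chunks and append each to the file
                l_amount := least( 32767, l_len - l_pos + 1 );
                dbms_lob.read( rec.html_blob, l_amount, l_pos, l_buffer );
                utl_file.put_raw( l_file, l_buffer, true );
                l_pos := l_pos + l_amount;
            end loop;
            utl_file.fclose( l_file );
        end loop;
    end;
    /

One file per row is usually more practical than a single CSV here, since 200,000 HTML documents would need heavy escaping to survive inside CSV fields.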
I found this tool.
It works incredibly well for extracting content of any type out of any sort of LOB to a file (HTML in this case). It takes about an hour to do 200,000 records, though.

How to import only some columns from XLS with ETL?

I want to do something like Read only certain columns from xls in Jaspersoft ETL Express 5.4.1, but I don't know the schema of the files. However, from what I read here, it looks like I can only do this with the Enterprise Version's Dynamic Schema feature.
Is there no other way?
You can do it using the tMap component. Design the job like below:
tFileInputExcel--main--tMap--main--your output
Create metadata for your input file (the Excel file), then use this metadata in your input component.
In tMap, select only the required columns in the output.
See the image of tMap where I am selecting only two columns from the input flow.
The Enterprise version has many features, and dynamic schema is the most important one. But as far as your concern goes, it is not required; it is only needed when you have a variable schema and don't know how many columns you will receive in your feed.

How to load multiple excel files into different tables based on xls metadata using SSIS?

I have multiple Excel files with two types of metadata. Now I have to push the data into two different tables, based on the metadata of the Excel files, using SSIS.
There are many, many different ways to do this. You'd need to share a lot more information on how your data is structured to really give a great answer, but here's the general strategy I'd suggest.
In the control flow tab, have a separate data flow for each Excel file. The data flows will all work the same, with the exception of having a different Excel source in each data flow, so it will be enough to get the first version working and then copy and paste for the other files.
In the data flow, use a conditional split transformation to read the metadata coming from Excel and send the row to the correct table.
If you really want to be fancy, however, you could create a child package that includes all your data flow logic. Using the Execute Package Task you can pass the Excel file name to the child package for each Excel file you need to import. This way you consolidate your logic in one package and can still import from multiple Excel files in parallel.
