This should be simple, but what is the correct syntax in an Oracle sqlldr control file to specify tab-delimited (tab-separated) data?
FWIW, I found the file was UTF-16 rather than UTF-8; it looked fine in the editor but introduced null bytes when Oracle read the control file. I can't even replicate it today.
Per this thread, it should be FIELDS TERMINATED BY '\t' (I don't have an Oracle installation at hand to verify that this is correct).
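A minimal control file sketch, assuming a hypothetical table and columns; the hex form X'09' is the unambiguous way to say "tab", and '\t' is also commonly accepted:
-- minimal sketch: my_tab_table and its columns are hypothetical
LOAD DATA
INFILE 'data.tsv'
APPEND INTO TABLE my_tab_table
-- X'09' is the hex code for the tab character
FIELDS TERMINATED BY X'09'
TRAILING NULLCOLS
(col1, col2, col3)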
I have a DDL script to create some tables, but the data is in .ctl files and I have never used those before. I did some research but didn't quite understand how to use SQLLDR. How does it work? Can I execute the .ctl files some other way? I'm just using PL/SQL and Oracle 10g.
The way you put it, it would go like this:
using the DDL script, create all those tables
if the CTL files contain data, I presume it is within the BEGINDATA section. Fine, couldn't be better, because as soon as you run the loader it will know where to find the data to be loaded (it also means the control file uses infile *, right? - see the sketch after these steps)
you have to have access to SQL*Loader
if you can connect to the database server, it is there
if you're using your own PC, see whether it is installed
along with the Client software
or, you might even have a database on your PC (XE?)
once you have it (the sqlldr.exe), make sure its directory is included in the PATH environment variable, or, if it isn't, invoke it by specifying the full path to it
open the command prompt of your operating system
navigate to the directory that contains the CTL files
run the loader as
sqlldr scott/tiger control=file1.ctl log=file1.log
If everything is OK, data will be loaded. Check log files!
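For reference, a minimal sketch of such a control file with in-line data (the table and column names are made up):
-- the data follows the BEGINDATA keyword, so INFILE * is used
LOAD DATA
INFILE *
APPEND INTO TABLE dept
FIELDS TERMINATED BY ','
(deptno, dname, loc)
BEGINDATA
10,ACCOUNTING,NEW YORK
20,RESEARCH,DALLAS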
I wish to import data from Excel into an Oracle table.
But my requirement is that I have multiple Excel files and each file contains multiple sheets.
All of them have the same structure, though.
Please let me know a suitable way to perform this.
Can the UTL_FILE utility be used to perform this extraction?
I don't know how to do it directly from Excel.
Actually, I do, but manually, using TOAD's "Import table data" option. I also don't know what technique stands behind the scenes; I can only guess:
- it is first temporarily saved as CSV and then loaded, or
- Excel files are XML, so TOAD manipulates that data directly
How would I automate it?
save each worksheet into its own CSV file
manual job for me as well. Maybe someone who knows Excel far better can write a VBA script or something like that. On the other hand, maybe it is possible to use such a script to insert data directly into an Oracle table ... no idea, sorry
write a SQL Loader control file
load file-by-file, reusing the same control file
how to automate that? Use the SQL*Loader command-line DATA parameter.
how to change it dynamically? Create a wrapper script (on MS Windows, that would be a .BAT script) which would, in a loop, iterate over all those CSV files and feed SQL*Loader new data in every iteration (see the sketch after this list)
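A rough sketch of that setup, assuming a reusable control file called load_excel.ctl and a hypothetical target table my_table:
-- load_excel.ctl: the DATA command-line parameter should override the placeholder INFILE
LOAD DATA
INFILE 'placeholder.csv'
APPEND INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2, col3)
And the wrapper .BAT script could look something like this:
@echo off
rem loop over every CSV in the current directory, reusing the same control file
for %%f in (*.csv) do (
  sqlldr scott/tiger control=load_excel.ctl data=%%f log=%%~nf.log
)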
SQL*Loader's advantage over UTL_FILE (which you mentioned) is that
it works locally (on your own PC; you have to install it, of course, if you don't have it already. It comes along with any Oracle database (even XE, Express Edition) and is also contained in the Oracle Client software under its Utilities), while
for UTL_FILE you have to speak to your DBA in order to get access to a directory (usually located on the database server) which would then be used by that package.
The XML report generates a huge file (xls) of around 800 MB - 1 GB. The system hangs when trying to open the file. I tried opening a 400 MB xls file and saved it as xlsb, which reduced the file size to 4.5 MB. Is there a way to generate the output in xlsb format instead of the default xls? The Oracle Apps version is 12.2.6.
You can use eText templates to produce a comma-separated values (CSV) file. They are usually used for EFT transfers for banks, but you can make them do whatever you want. Since the output is plain text, it won't have any of the formatting markup that BI Publisher would add to an Excel file. You can then open it in Excel and do what you wish with it.
As far as I'm aware, BI Publisher cannot do it, and Oracle has an enhancement request logged for this: Bug 24545689 : BI PUBLISHER EXCEL .XLSX TEMPLATE
For large data exports from Oracle EBS into native .xlsx format, you can use a third party solution such as our Blitz Report: www.enginatics.com/faq/#how-does-blitz-report-compare-with-oracle-bi-publisher
It is free for limited use.
I have a problem when exporting a database to a dump file in Oracle 11g XE.
It runs successfully, but my dump file gets the wrong file name when I use Japanese.
This is my command to export dump file:
EXPDP test/123 TABLES=t_tprt_kki_kmk_mpg_mstr DIRECTORY=BACKUP_DIR DUMPFILE=テンプレート公開項目マッピングマスタ.dmp LOGFILE=テンプレート公開項目マッピングマスタ.log
And the file I retrieved was: ウンシレーエ公開項目マィゴングマスタ.DMP. I think that may be due to uppercase. I used nls_lang to set the language and charset.
Please help me solve it.
I don't think you can solve it. There are a couple of bugs on MOS (e.g. 22004180, 22004268 - though for 12c) which refer to garbled dumpfile names when multibyte characters are used (both examples happen to refer to Japanese, but it's probably more general than that), and which have been closed as not-a-bug. That seems odd, as this isn't listed as a restriction in the documentation.
The only 'workaround' seems to be to not use multibyte characters in the file name, which doesn't really help you.
You could export with a singlebyte-character-only name and then rename the file at operating system level; which is a bit of a pain, and you may find a similar issue on import unless you rename it back to singlebyte characters for that too.
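A rough sketch of that workaround, reusing the credentials, directory, and table from the question (the ASCII dump file name is just an example):
EXPDP test/123 TABLES=t_tprt_kki_kmk_mpg_mstr DIRECTORY=BACKUP_DIR DUMPFILE=template_mapping_master.dmp LOGFILE=template_mapping_master.log
Then, in the operating-system directory that BACKUP_DIR points to, rename the file, e.g. on Windows:
ren template_mapping_master.dmp テンプレート公開項目マッピングマスタ.dmp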
I always have issues importing CSV (products) into Magento. No matter what I do, I always get errors that don't make any sense, and I can never import anything. I am on a Mac and I've read that this can cause issues, so I've been updating everything in Google Docs and then downloading it as a CSV. The current error I have is:
"Can not find required columns: sku"
Which is a column in my CSV file.
If you're using a Mac, you need to ensure that you are saving your CSV:
In UTF-8 encoding
as CSV (Windows) file type
If you want to deal with it manually, you'll need to find and replace the carriage-return line breaks in the file. From the command line:
tr '\r' '\n' < file_excel_munged.csv >| fixed_file.csv
But just saving it as a Windows CSV/Excel file should do the trick.
I resolved this same issue today by saving as UTF-8 without BOM. By default I believe Excel will save with a BOM. Magento should really be smart enough to recognize the BOM imho.
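If you can't control how the file is saved, here is a quick sketch of stripping an existing UTF-8 BOM from the command line (the file name is just an example, and this assumes the file really starts with the 3-byte BOM):
# skip the first 3 bytes (the UTF-8 BOM: EF BB BF)
tail -c +4 products_with_bom.csv > products.csv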