Export UTF-8 data to a text file with Oracle SQL Developer

I need to export UTF-8 data to a text file in the form of INSERT statements. I've tried Oracle SQL Developer's export data tool, but it uses ANSI encoding to export data to the txt file, so UTF-8 characters turn into "?" marks. I wonder whether it's possible to export using UTF-8 encoding?
Thank you.

You can export data in UTF-8 with SQL Developer. Make sure your client NLS parameters are set up correctly; if you're on Windows, these settings are in the registry. Also make sure that in SQL Developer's settings panel you have selected UTF-8 as the encoding character set.
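As a sketch of the client-side setup on Windows (the NLS_LANG value shown is the standard Oracle UTF-8 character set; whether you set it in the environment or the registry depends on your client install):

```shell
rem Sketch, assuming a Windows Oracle client: make the client use UTF-8
rem for the current session before starting SQL Developer.
set NLS_LANG=AMERICAN_AMERICA.AL32UTF8

rem Then, inside SQL Developer, set the IDE encoding as well:
rem Tools > Preferences > Environment > Encoding = UTF-8
```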
See Also
Oracle Unicode Spooling

Related

Oracle Report6i - Export to Excel Option/Button

Please guide me on how to enable an option/button in Oracle Reports 6i to export output into Excel format.
Waiting for your response
BR
Javed Akram
You can't, as far as I can tell. Reports 6i doesn't have an option to save results in native MS Excel format.
The closest you can do is set DESFORMAT to DELIMITED:
Means that the report output is sent to a file that can be read by standard spreadsheet utilities, such as Microsoft Excel. If you do not choose a delimiter, then the default delimiter is a TAB.
(see documentation)
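As a sketch of what that looks like from the command line (the report name, connect string, and output file are placeholders; rwrun60 is the Reports 6i runtime executable and DESFORMAT/DESNAME/DESTYPE are its standard parameters):

```shell
rem Hypothetical report and file names; DESFORMAT=DELIMITED produces
rem tab-delimited output (by default) that Excel can open directly.
rwrun60 report=myreport.rdf userid=scott/tiger@orcl destype=file desformat=DELIMITED desname=myreport.txt
```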

Output file isn't encoded in UTF-8 when using a standalone Talend job

I have a simple Talend job that reads a CSV file as input, sends a SOAP request to a webservice and then returns some fields of the response in a CSV file as output. The job deals with addresses throughout Europe, so the various fields of the output can contain accents or non-Latin characters (e.g. for addresses in Belarus).
When I run the job inside Talend Open Studio, my output file is correctly encoded in UTF-8 and all the special characters appear fine when I open the file in Notepad++. However, when I export the job as a standalone (using the "Build Job" menu option) and run the .bat file, none of the special characters are correctly encoded. When I open the file in Notepad++ it clearly says that it's encoded in UTF-8, but the end result is still wrong.
Am I missing something, or doing something wrong? I haven't found any option in Talend besides choosing "UTF-8" as encoding in the advanced options of my tFileOutputDelimited component.
Thanks in advance for your help
Passing -Dfile.encoding=UTF-8 as an argument to your JVM should solve the issue.
In order to set this in Talend, you can use the advanced settings tab in the Run view, and add a JVM argument: -Dfile.encoding=UTF-8
You can set this globally in Talend preferences as well: Windows > Preferences > Talend > Run/Debug
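If you'd rather fix the exported standalone job directly, the generated .bat file contains a plain java invocation, and the only change needed is the extra JVM flag (the classpath and class name below are placeholders for whatever the build produces):

```shell
rem Placeholder for the line the Talend build generates in the .bat:
rem java -cp classpath.jar;. myproject.myjob_0_1.MyJob
rem Add the encoding flag before the main class:
java -Dfile.encoding=UTF-8 -cp classpath.jar;. myproject.myjob_0_1.MyJob
```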

In SQL*Plus, how to change or specify a file's encoding as UTF-8

In SQL*Plus I created a file:
>EDIT test.sql
The file's encoding is ANSI. I want to change it to UTF-8; I tried Save As but it didn't work.
How do I change test.sql from ANSI to UTF-8 in SQL*Plus?
If you want to change your default editor for SQL*Plus, you can do this:
Log on to SQL*Plus.
Use the "define" command to set the path to the new editor. I like TextPad (which handles UTF-8 automatically), so the command (for my machine anyway) would be:
define _editor="c:\Program Files\TextPad 7\TextPad.exe"
Now, the edit command will bring up the new editor.
For TextPad, you can change the encoding by doing the following:
Conversion: Conversion between various file formats and encodings can
be made using the Save As command on the File menu. The options for
encoding are ANSI, DOS, Unicode, Unicode (big endian) and UTF-8.
So after doing the steps above, edit will bring up TextPad, and using Save As you can save the file in various encodings.
This is one benefit of changing the default editor from Notepad.

Oracle spool UTF-8 without BOM at beginning of file

I spool data from Oracle to a CSV file.
When I open the CSV in an editor, all characters are shown correctly,
but in Excel or Kofax special characters in the file (e.g. ß or ö)
are not displayed correctly.
Could an option be to add the missing BOM characters at
the beginning of the file? If yes, how should I do that?
If not, is there another way to do it?
Best regards
Patrick
I assume you use SQL*Plus on Windows to spool the data.
In this case SQL*Plus inherits the code page from cmd.exe. It can be interrogated with the chcp command and is most likely CP850 or CP437.
Spooled files are written in this encoding. While CP850 and CP437 are very common for the console, they are hardly supported by any applications or editors.
Change your code page to Windows-1252, for example. However, you must also tell Oracle (i.e. the Oracle drivers) that you are using Windows-1252; this is done via the NLS_LANG environment variable:
chcp 1252
set NLS_LANG=GERMAN_GERMANY.WE8MSWIN1252
Then launch your SQL*Plus and everything in spooled files should be fine.
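If the consuming application still insists on a BOM, SQL*Plus itself won't write one, but you can prepend the three UTF-8 BOM bytes (EF BB BF) to the spooled file afterwards. A minimal sketch, assuming the spool was already produced in UTF-8 (the file names are placeholders, and the first printf only stands in for the real spool output):

```shell
# Stand-in for the spooled UTF-8 file (in practice this comes from SPOOL);
# the octal escapes encode "Straße;Köln" in UTF-8:
printf 'Stra\303\237e;K\303\266ln\n' > data.csv

# Prepend the UTF-8 BOM (bytes EF BB BF) so Excel auto-detects the encoding:
printf '\357\273\277' > with_bom.csv
cat data.csv >> with_bom.csv
```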

Exporting to CSV (using built-in tools) - incorrect characters

When I export data from a screen using the built-in show-csv-button="true", the application prints incorrect characters in place of special characters (the EUR sign, accent marks). Can I fix this by changing the character set?
This seems to be an issue with the way MS Excel loads the CSV. Opening it directly from the browser's downloads causes the problem. When I import the file the standard way, letting Excel check the character set, the data is OK.
Using Excel's import utility, it recognizes the UTF-8.
When opening from the OS, 1250: Central European is used.
