Register/Trademark symbols in Vertica

I have a txt file containing some data.
One of the columns contains register/trademark/copyright symbols in it.
For example, "DataWeb #symphone ®" and "Copyright © technologies".
Now when I load this txt file into the database, all data gets stored properly except these symbols: ® ©.
Are they supported by Vertica? Is there any way to do this?
Thanks!

Vertica supports Unicode characters encoded in UTF-8. Your message is a bit vague because it is not clear what your problem is. If I were you, I would double-check that those characters are properly encoded and that your font set is able to display them. Here you have a little test...
First let's create a properly UTF-8 encoded file:
$ echo -e "DataWeb #symphone \xc2\xae" > /tmp/test.dat
$ echo -e "Copyright \xc2\xa9 technologies" >> /tmp/test.dat
$ cat /tmp/test.dat
DataWeb #symphone ®
Copyright © technologies
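If you want to confirm a file really is valid UTF-8 before loading, a quick check with standard Linux tools (nothing Vertica-specific; iconv exits non-zero on an invalid byte sequence):
$ file -i /tmp/test.dat
/tmp/test.dat: text/plain; charset=utf-8
$ iconv -f UTF-8 -t UTF-8 /tmp/test.dat > /dev/null && echo "valid UTF-8"
valid UTF-8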
Then let's create/load a table:
$ vsql
SQL> CREATE TABLE public.test ( txt VARCHAR(20) ) ;
SQL> COPY public.test FROM '/tmp/test.dat' ABORT ON ERROR DIRECT;
And, finally, let's query this table:
$ vsql
SQL> SELECT txt FROM public.test ;
txt
---------------------
DataWeb #symphone ®
Copyright © technol
(2 rows)
Note that the second row was truncated: Vertica VARCHAR lengths are measured in octets, not characters, so the 25-byte string "Copyright © technologies" (© is a two-byte UTF-8 character) was cut at the first 20 bytes.
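You can see the character-vs-byte distinction directly, since CHARACTER_LENGTH counts characters while OCTET_LENGTH counts bytes (query only, output omitted):
SQL> SELECT txt, CHARACTER_LENGTH(txt), OCTET_LENGTH(txt) FROM public.test ;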
I'd suggest running this test from Linux using the vsql command-line interface (avoid Windows and point-and-click interfaces).

Related

What is the Oracle command 'o;'?

I am using Oracle Database 11g Release 11.2.0.1.0.
I accidentally ran the command o; in Oracle SQL Developer.
The result is as below.
The PL/SQL procedure completed successfully.
not spooling currently
The sqlcl_int_runme alias has been removed.
I don't know what I did....
First of all, there seems to be no problem with basic table CRUD.
Has anyone had this experience?
I need an explanation of what happened...
It's an alias.
We copied over some popular commands from PostgreSQL to SQLcl, and one of those was 'o'.
From the psql docs:
\o or \out [ filename ]
\o or \out [ |command ]
Arranges to save future query results to the file filename or pipe future results to the shell command command. If no argument is specified, the query output is reset to the standard output.
If the argument begins with |, then the entire remainder of the line is taken to be the command to execute, and neither variable interpolation nor backquote expansion are performed in it. The rest of the line is simply passed literally to the shell.
"Query results" includes all tables, command responses, and notices obtained from the database server, as well as output of various backslash commands that query the database (such as \d); but not error messages.
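For comparison, this is the workflow the alias mimics in psql itself (a minimal illustration against a hypothetical regions table, not SQLcl):
postgres=# \o results.txt
postgres=# SELECT * FROM regions;
postgres=# \o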
SQL> alias
\! \? \c \cd \d \dp \dt \dt+ \e \echo \encoding \i
\o \p \prompt \q \qecho \r \save \timing \w \z clear cls
cpu fuzzy gglag locks sessions tables tables2 topsql
SQL> alias list \o
\o NULLDEFAULTS psql - desc \o [FILE_NAME] - turn spool log file on (or off if no FILE_NAME given)
--------------------------------------------------------------------------------------------------
Declare
maxpos number:=null;
BEGIN
if (:sqlcl_int_first is null) then
:sqlcl_int_runme:='spool off';
else
:sqlcl_int_runme:='spool '||:sqlcl_int_first||' ';
end if;
end;
/
alias NULLDEFAULTS sqlcl_int_runme=:sqlcl_int_runme;
sqlcl_int_runme
alias drop sqlcl_int_runme
To see it in action...
SQL> set sqlformat csv
SQL> o stackoverflow.csv
PL/SQL procedure successfully completed.
Alias sqlcl_int_runme dropped
SQL> select * from regions;
"REGION_ID","REGION_NAME"
1,"Europe"
2,"Americas"
3,"Asia"
4,"Middle East and Africa"
SQL> o
PL/SQL procedure successfully completed.
Alias sqlcl_int_runme dropped
SQL> !dir stackoverflow.csv
Volume in drive C is System
Volume Serial Number is F897-6A6F
Directory of c:\sqlcl\22.2.1\sqlcl\bin
08/30/2022 08:09 AM 170 stackoverflow.csv
1 File(s) 170 bytes
0 Dir(s) 190,156,173,312 bytes free
SQL> !type stackoverflow.csv
Alias sqlcl_int_runme dropped
"REGION_ID","REGION_NAME"
1,"Europe"
2,"Americas"
3,"Asia"
4,"Middle East and Africa"
PL/SQL procedure successfully completed.

Acute accent in flat file PL/SQL

I have a file which has 4,500 records, and a few records contain acute accents. Each record has a length of 2,600. When I process the file and, say, record 1000 has two acute accents, v_newline brings back only 2,598 characters, ignoring those accented characters. This happens when I use the ANSI format; with UTF-8 it works fine. The system pulls the file from FTP, and FTP gets the file from a mainframe. For testing I'm keeping the file on my local Windows machine.
How do I solve this?
v_filehandle := utl_file.fopen(p_filedir, v_filename, 'r', 5000);
LOOP
  BEGIN
    utl_file.get_line(v_filehandle, v_newline);
    DBMS_OUTPUT.put_line('v_line ' || v_newline);
.....
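One workaround, assuming the "ANSI" file is actually Windows-1252 (worth verifying against a sample from the mainframe feed), is to convert it to UTF-8 before UTL_FILE reads it, using the same iconv approach as the sqlldr question below (file names are placeholders):
$ iconv -f WINDOWS-1252 -t UTF-8 input.dat -o input_utf8.dat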

oracle sqlldr not recognizing special characters

I am facing a scenario where sqlldr is not able to recognize special characters. I usually don't bother about this, as it's not important for me to have the exact same names; however, this led to another issue which is causing the system to malfunction.
unittesting.txt
8888888,John SMITÉ,12345678
unittesting.ctl
load data
CHARACTERSET UTF8
infile 'PATH/unittesting.txt'
INSERT
into table temp_table_name
Fields Terminated By ',' TRAILING NULLCOLS(
ID_NO CHAR(50) "TRIM(:ID_NO)" ,
NAME CHAR(50) "TRIM(:NAME)" ,
ID_NO2 CHAR(50) "TRIM(:ID_NO2)" )
SQLLDR command
sqlldr DB_ID/DB_PASS@TNS
control=PATH/unittesting.ctl
log=PATH/unittesting.log
bad=PATH/unittesting.bad
errors=100000000
OUTPUT from table
|ID_NO |NAME |ID_NO2 |
|8888888 |John SMIT�12345678 | |
Other information about system [RHEL 7.2, Oracle 11G]
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
select userenv('language') from dual
OUTPUT: AMERICAN_AMERICA.AL32UTF8
file -i unittesting.txt
OUTPUT: unittesting.txt: text/plain; charset=iso-8859-1
echo $LANG
OUTPUT: en_US.UTF-8
Edit:
So I tried to change my file's encoding as advised by Cyrille MODIANO and use it. The issue got resolved.
iconv -f iso-8859-1 -t UTF-8 unittesting.txt -o unittesting_out.txt
My challenge now is that I don't know the character set of the incoming files, and they come from many different sources. The output of file -i that I get for my source data file is:
: inode/x-empty; charset=binary
From my understanding, charset=binary means that the character set is unknown. Please advise what I can do in this case. Any small advice or idea is much appreciated.
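Note that inode/x-empty in that output actually means the file tested was empty; charset=binary on a non-empty file is what indicates an encoding file cannot identify. When the encoding varies by source, one approach is to test whether each file is already valid UTF-8 and convert only when it is not, assuming the non-UTF-8 feeds are ISO-8859-1 like the sample above (a sketch with placeholder file names, not a guaranteed fix):
$ if iconv -f UTF-8 -t UTF-8 unittesting.txt > /dev/null 2>&1; then
>     cp unittesting.txt unittesting_out.txt
> else
>     iconv -f ISO-8859-1 -t UTF-8 unittesting.txt -o unittesting_out.txt
> fi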

Conditional import - how to discard records?

I want to import a csv file using SQLLDR, but I only want specific records. I have solved this with "WHEN record_type = 1" in my control file.
This works, but the log file is getting flooded with "Record xxx: Discarded - failed all WHEN clauses." messages. The input files contain millions of records but only a few percent satisfy the condition, so I end up with a log file the same size as the input file :)
Am I doing this incorrectly?
Is there another way to discard/filter records when using SQLLDR?
Example Data:
record_type;a;b;c
24;a1;b1;c1
17;a2;b2;c2
22;an;bn;cn
1;a1;b1;c1
1;a2;b2;c2
1;an;bn;cn
Control file
load data
truncate
into table my_table_t
WHEN record_type = 1
(...
)
What you are doing is right, IMO.
SQL*Loader logs the load details at the finest level for you, but you can opt out of a few of those things.
You can disable the logging of DISCARDED records by adding
SILENT=(DISCARDS) to your SQL*Loader command line.
You can refer to the documentation for further details.
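For example, with the same control file as in the demo below, the per-record discard messages can be suppressed like this (credentials are the demo's placeholders):
$ sqlldr jay/password@orapdb1 control=control.ctl data=sample.txt silent=discards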
If you just want to get rid of the log entirely, you can send it to /dev/null if you are using Linux/Unix, or to NUL on Windows.
Example
Data File.
[oracle@ora12c Desktop]$ cat sample.txt
record_type;a;b;c
24;a1;b1;c1
17;a2;b2;c2
22;an;bn;cn
1;a1;b1;c1
1;a2;b2;c2
1;an;bn;cn
Control file.
[oracle@ora12c Desktop]$ cat control.ctl
load data
infile 'sample.txt'
insert
into table table_1 when record_type = '1'
fields terminated by ";"
(record_type, a, b, c)
Let's try to load the records.
[oracle@ora12c Desktop]$ sqlldr jay/password@orapdb1 control=control.ctl data=sample.txt log=/dev/null
SQL*Loader: Release 12.1.0.2.0 - Production on Fri Feb 10 16:05:10 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Path used: Conventional
Commit point reached - logical record count 7
Table TABLE_1:
3 Rows successfully loaded.
Check the log file:
/dev/null
for more information about the load.
There was no log file.
Now we get only the selected records.
SQL> select * from table_1;
RECORD_TYPE A B C
----------- -------------------- -------------------- --------------------
1 a1 b1 c1
1 a2 b2 c2
1 an bn cn
Alternatively, if you define the csv as an external table, you can then use simple SQL to load your table...
insert into my_table_t( record_type, a, b, c )
select record_type, a, b, c
from my_external_table
where record_type = 1
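A minimal sketch of such an external table over the sample file (the data_dir directory object, column sizes, and table name are assumptions to adapt to your environment; skip 1 ignores the header row):
create table my_external_table (
  record_type number,
  a varchar2(20),
  b varchar2(20),
  c varchar2(20)
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    skip 1
    fields terminated by ';'
    missing field values are null
  )
  location ('sample.txt')
)
reject limit unlimited;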

Oracle dump file table data extraction to file (original exp format)

I have Oracle dump files created with original exp (not expdp) (EXPORT:V10.02.01, Oracle 10g). They contain only table data for four tables.
1) I want to extract the table data into files (flat/fixed-width, CSV, or other text files) without importing them into another Oracle DB. [preferred]
2) Alternatively, I need a solution that can import them into an ordinary user (not SYSDBA) so that I can use other tools to extract the data.
My databases are 11g, but I can find 10g databases if needed. I have TOAD for Oracle Xpert 11.6.1.6 at my disposal. I am a moderately experienced Oracle programmer, but I haven't worked with EXP/IMP before.
(The information below has been obscured to protect the data.)
Here's how the dump files were created:
exp FILE=data.dmp \
LOG=data.log \
TABLES=USER1.TABLE1,USER1.TABLE2,USER1.TABLE3,USER1.TABLE4 \
INDEXES=N TRIGGERS=N CONSTRAINTS=N GRANTS=N
Here's the log:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Note: grants on tables/views/sequences/roles will not be exported
Note: indexes on tables will not be exported
Note: constraints on tables will not be exported
About to export specified tables via Conventional Path ...
Current user changed to USER1
. . exporting table TABLE1 271 rows exported
. . exporting table TABLE2 272088 rows exported
. . exporting table TABLE3 2770 rows exported
. . exporting table TABLE4 21041 rows exported
Export terminated successfully without warnings.
Thank you in advance.
UPDATE:
TOAD version 9.7.2 will read a "dmp" file generated by EXP.
Select DATABASE -> EXPORT -> EXPORT FILE BROWSER from the menus.
You need to have the DBA utilities for TOAD installed. There is no real guarantee that the file is parsed
correctly, but the data will show up in TOAD in the schema browser.
NOTE: The only other known utility that will read a dmp file generated by the exp utility is the imp utility. You cannot parse the dump file yourself; if you do, you run the risk of parsing the file incorrectly.
If you already have the data in an ORACLE table:
To extract the table data into a file, create a shell script that calls SQL*PLUS and causes SQL*PLUS to spool the table data to a file. You need one script per table.
#!/bin/sh
#NOTE: The path to sqlplus will vary on your system,
# but it is generally $ORACLE_HOME/bin/sqlplus.
#YOU NEED TO UNCOMMENT THESE LINES AND SET APPROPRIATELY.
#export ORACLE_SID=YOUR_SID
#export ORACLE_HOME=PATH_TO_YOUR_ORACLE_HOME
#export PATH=$PATH:$ORACLE_HOME/bin
sqlplus -s user/pwd@db << EOF
set pagesize 0
set linesize 255
set heading off
set echo off
SPOOL TABLE1_DATA.txt
REM FOR EACH COLUMN IN TABLE1, SET THE FORMAT
COL FIELD_ID format 999,999,999
COL FIELD_DATA format a99
select FIELD_ID,FIELD_DATA from TABLE1;
SPOOL OFF
EOF
Make sure you set the line size of each line and set the format of each column. See FIELD_ID above for a number format column and FIELD_DATA for a character column.
NOTE: You need to remove the "N rows selected" line from the end of the file (or add set feedback off to the script to suppress it).
(You can still import the dump file into another schema using the imp utility.)
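For option 2, a minimal sketch of pulling the tables into an ordinary user with the imp utility (scott/tiger is a placeholder account that needs CREATE TABLE privilege and tablespace quota; FROMUSER/TOUSER remaps the owner):
imp scott/tiger FILE=data.dmp LOG=imp.log FROMUSER=USER1 TOUSER=SCOTT TABLES=(TABLE1,TABLE2,TABLE3,TABLE4)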
