How can I retrieve a BLOB (LONG RAW) from Oracle and write it to a file on disk?

I have a table with this script:
CREATE TABLE PRSNL_IMG
(
PER_IMG LONG RAW,
NATIONAL_COD NUMBER
);
Now I want to retrieve each image (PER_IMG, type LONG RAW) from the database and write it to a file named after NATIONAL_COD in a folder on disk such as "D:\Photo_OUT\". My table has 82,000 records. I work with Forms 6i and my database version is Oracle 11g.
Please help me.

You can create a table with a BLOB column like this:
CREATE GLOBAL TEMPORARY TABLE foo (
b BLOB);
INSERT INTO foo VALUES (hextoraw('1f8b080087cdc1520003f348cdc9c9d75128cf2fca49d1e30200d7bbcdfc0e000000'));
and just select it:
SELECT b FROM foo;
-- (BLOB)
SELECT UTL_RAW.CAST_TO_RAW(UTL_RAW.CAST_TO_VARCHAR2(b))
FROM foo;
-- 1F8B080087CDC1520003F348CDC9C9D75128CF2FCA49D1E30200D7BBCDFC0E000000
If the content is not too large, you can also use
SELECT CAST ( <blobfield> AS RAW( <maxFieldLength> ) ) FROM <table>;
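For the actual task in the question — dumping each LONG RAW image to a file named after NATIONAL_COD — a minimal server-side sketch could look like the one below. Only PRSNL_IMG, PER_IMG and NATIONAL_COD come from the question; the staging table PRSNL_IMG_BLOB, the directory object PHOTO_OUT, the .jpg extension and the 32767-byte chunk size are assumptions.

-- Staging table: TO_LOB converts LONG RAW to BLOB (TO_LOB is only valid in CTAS / INSERT ... SELECT)
CREATE TABLE prsnl_img_blob AS
SELECT national_cod, TO_LOB(per_img) AS per_img
FROM   prsnl_img;

-- Directory object pointing at the output folder on the database server
CREATE OR REPLACE DIRECTORY photo_out AS 'D:\Photo_OUT';

DECLARE
  l_file   UTL_FILE.FILE_TYPE;
  l_buffer RAW(32767);
  l_amount PLS_INTEGER;
  l_pos    PLS_INTEGER;
  l_len    PLS_INTEGER;
BEGIN
  FOR r IN (SELECT national_cod, per_img FROM prsnl_img_blob) LOOP
    -- one file per row, named after NATIONAL_COD
    l_file := UTL_FILE.FOPEN('PHOTO_OUT', r.national_cod || '.jpg', 'wb', 32767);
    l_len  := DBMS_LOB.GETLENGTH(r.per_img);
    l_pos  := 1;
    WHILE l_pos <= l_len LOOP
      l_amount := LEAST(32767, l_len - l_pos + 1);
      DBMS_LOB.READ(r.per_img, l_amount, l_pos, l_buffer);
      UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);
      l_pos := l_pos + l_amount;
    END LOOP;
    UTL_FILE.FCLOSE(l_file);
  END LOOP;
END;
/

Note that UTL_FILE writes on the database server, not on the Forms 6i client machine, so D:\Photo_OUT must exist on (or be reachable from) the server.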

Related

Oracle: inner join between file and table

I'm new to oracle and plsql, so just bear with me.
I have a file TYPES.txt,
id,name,values
1,aaa,32
2,bbb,23
3,cvv,12
4,fff,54
I also have a table in my db, PARTS.ATTRIBUTES
id,name,props,crops
1,aaa,100,zzzz
2,bbb,200,yyyy
3,cvv,300,xxxx
4,fff,400,wwww
5,sasa,343,gfgg
6,uyuy,897,hhdf
I'd like to do an INNER JOIN between the file TYPES and ATTRIBUTES based on the column name. Now, I have done this by first loading the file TYPES into a temp table and then doing an INNER JOIN between the temp table and the ATTRIBUTES table.
But I'd like to know whether it is possible to do an INNER JOIN between the TYPES file and the ATTRIBUTES table without making use of a temp table.
I understand that I can load the file with UTL_FILE and read its rows using the following script:
declare
  v_file utl_file.file_type;
  v_line varchar2(500);
begin
  v_file := utl_file.fopen('USER_DIR', 'TYPES.txt', 'r');
  loop
    utl_file.get_line(v_file, v_line);
    dbms_output.put_line(v_line);
  end loop;
exception
  when no_data_found then   -- get_line raises this at end of file
    utl_file.fclose(v_file);
end;
Could someone be kind enough to explain to me how I can do the join between the file contents and the db table?
P.S. The file TYPES.txt is dynamically generated and can have a different number of columns at different times.
One cleaner approach is to use an EXTERNAL TABLE.
Use a CREATE statement like this to create the TYPES_external table:
CREATE TABLE TYPES_external (
  id   NUMBER(5),
  name VARCHAR2(50),
  vals VARCHAR2(50)    -- "values" is an Oracle reserved word, so the column is renamed here
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY USER_DIR
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1             -- skip the header line "id,name,values"
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (
      id   CHAR(5),
      name CHAR(50),
      vals CHAR(50)
    )
  )
  LOCATION ('TYPES.txt')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;
Once created, you can use this external table (TYPES_external) just as you would use any database table for SELECT operations.
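For the original question, the join then runs directly against the external table. A minimal sketch, assuming PARTS is the schema name as written in the question and using the renamed vals column from the DDL above:

SELECT a.id, a.name, a.props, a.crops, t.vals
FROM   parts.attributes a
       INNER JOIN types_external t ON t.name = a.name;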

Want to create Stored Procedure of new table with calculated variable

I want to create a stored procedure using the table from Oracle and access it in SAS EG.
I have the code below:
create table xyz as
select * from (
select a,b,c,d
from table_name
)
pivot (MIN('X') for Variable_name in (
'PQR' as PQR, 'PGT' as PGT, 'KLD' as KLD,
'opd' as opd
)
)
order by Variable_name;
Is it possible to turn this table creation into a stored procedure? If not, please suggest a solution.
If I'm understanding your problem correctly, you should be able to do this using SQL pass-through. You will just need to specify the Oracle database in a LIBNAME statement in SAS, which will look something like this:
libname mydblib oracle user=user_name password=pw path='myoracleserver';
You can then access this using proc sql in SAS.
proc sql;
connect using mydblib;
create table xyz as
select * from connection to mydblib (
select * from (
select a,b,c,d
from table_name
)
pivot (MIN('X') for Variable_name in (
'PQR' as PQR, 'PGT' as PGT, 'KLD' as KLD,
'opd' as opd
)
)
order by Variable_name
);
quit;
This code creates the table xyz in your SAS work library.
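If it helps to see what the Oracle side of the pass-through query does, here is a minimal standalone illustration with made-up rows (the id and variable_name columns and their values are hypothetical): MIN('X') simply turns each listed Variable_name value into a flag column holding 'X' where that value occurs.

SELECT *
FROM  (SELECT 1 AS id, 'PQR' AS variable_name FROM dual UNION ALL
       SELECT 1,       'KLD'                  FROM dual UNION ALL
       SELECT 2,       'PGT'                  FROM dual)
PIVOT (MIN('X') FOR variable_name IN ('PQR' AS pqr, 'PGT' AS pgt,
                                      'KLD' AS kld, 'opd' AS opd))
ORDER BY id;

-- ID  PQR  PGT  KLD  OPD
--  1  X         X
--  2       X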

How do you insert data into complex data type "Struct" in Hive

I'm completely new to Hive and Stack Overflow. I'm trying to create a table with complex data type "STRUCT" and then populate it using INSERT INTO TABLE in Hive.
I'm using the following code:
CREATE TABLE struct_test
(
address STRUCT<
houseno: STRING
,streetname: STRING
,town: STRING
,postcode: STRING
>
);
INSERT INTO TABLE struct_test
SELECT NAMED_STRUCT('123', 'GoldStreet', 'London', 'W1a9JF') AS address
FROM dummy_table
LIMIT 1;
I get the following error:
Error while compiling statement: FAILED: SemanticException [Error
10044]: Cannot insert into target because column number/types are
different 'struct_test': Cannot convert column 0 from struct<...> to
array<...>.
I was able to use similar code with success to create and populate a data type Array but am having difficulty with Struct. I've tried lots of code examples I've found online but none of them seem to work for me... I would really appreciate some help on this as I've been stuck on it for quite a while now! Thanks.
Your SQL is wrong. You should use this SQL:
INSERT INTO TABLE struct_test
SELECT NAMED_STRUCT('houseno','123','streetname','GoldStreet', 'town','London', 'postcode','W1a9JF') AS address
FROM dummy_table LIMIT 1;
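Once the row is in, individual struct fields can be read back with dot notation, for example:

SELECT address.houseno, address.town FROM struct_test;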
You cannot insert complex data types directly in Hive. For inserting structs you have the function named_struct. You need to create a dummy table with the data that you want inserted into the struct column of the desired table.
In your case, create a dummy table:
CREATE TABLE dummy (
  houseno    STRING,
  streetname STRING,
  town       STRING,
  postcode   STRING
);
Then, to insert into the desired table, do:
INSERT INTO struct_test
SELECT named_struct('houseno', houseno, 'streetname', streetname,
                    'town', town, 'postcode', postcode)
FROM dummy;
No need to create any dummy table; just use the command:
insert into struct_test
select named_struct("houseno","house_number","streetname","xxxy","town","town_name","postcode","postcode_name");
It is possible:
You must give the column names in the SELECT from dummy or another table.
INSERT INTO TABLE struct_test
SELECT NAMED_STRUCT('houseno','123','streetname','GoldStreet', 'town','London', 'postcode','W1a9JF') AS address
FROM dummy
Or
INSERT INTO TABLE struct_test
SELECT NAMED_STRUCT('houseno',tb.col1,'streetname',tb.col2, 'town',tb.col3, 'postcode',tb.col4) AS address
FROM table1 as tb
CREATE TABLE IF NOT EXISTS sunil_table(
id INT,
name STRING,
address STRUCT<state:STRING,city:STRING,pincode:INT>)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '.';
I tried something like
INSERT INTO sunil_table 1, "name" SELECT named_struct("state","haryana","city","fbd","pincode",4500);
but it is not valid. How do I insert both normal and complex data into the table?
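For that follow-up, a hedged sketch (assuming a Hive version that accepts SELECT without a FROM clause; otherwise SELECT from any one-row table) would put the scalar values and the named_struct in the same SELECT list:

INSERT INTO TABLE sunil_table
SELECT 1,                                                                  -- id
       'name',                                                             -- name
       named_struct('state', 'haryana', 'city', 'fbd', 'pincode', 4500);   -- address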

Assign formats/informats for Oracle dataset in SAS

I use the SAS/ACCESS pass-through facility in order to create a table in Oracle:
proc sql;
connect to oracle(user=mylogin orapw=mypw path=mypath);
execute (
create table FCFCORE.RUS_FSC_RATE
( DATE_KEY NUMBER(8),
RATE NUMBER(20,10)
)
) by oracle;
disconnect from oracle;
quit;
When in EG I check properties of the table I see that DATE_KEY has format/informat 9., and RATE has informat 22.10. How can I change these formats?
If I use proc datasets, the following error occurs: ERROR: The HEADER/VARIABLE UPDATE function is not supported by the ORACLE engine.
You can use the DBSASTYPE data set option to override the default.
Example:
proc append base=dblib.hrdata (dbsastype=(empid='CHAR(20)'))
data=saslib.personnel;
run;

How to compare two Oracle schemas to get delta changes by altering the table rather than dropping and recreating it

I've already tried out a tool named TOYS. I found it free, but unfortunately it didn't work.
Then I tried "RED-Gate Schema Compare for Oracle", but it drops and recreates the table, while I need to just alter the table with the newly added/dropped columns.
Any help is highly appreciated
Thanks
Starting from Oracle 11g you can use the dbms_metadata_diff package, specifically the compare_alter() function, to compare the metadata of two schema objects:
Schema #1 HR
create table tb_test(
col number
)
Schema #2 HR2
create table tb_test(
col_1 number
)
select dbms_metadata_diff.compare_alter(
         'TABLE'     -- schema object type
       , 'TB_TEST'   -- object name in the first schema
       , 'TB_TEST'   -- object name in the second schema
       , 'HR'        -- first schema (defaults to the current schema)
       , 'HR2'       -- second schema
       ) as res
from dual;
Result:
RES
-------------------------------------------------
ALTER TABLE "HR"."TB_TEST" ADD ("COL_1" NUMBER);
ALTER TABLE "HR"."TB_TEST" DROP ("COL");
